Monday, December 28, 2009

What happens when you browse to a web site

This is a perennial favorite in technical interviews: "so you type '' in your favorite web browser. In as much detail as you can, tell me what happens."
Let's assume that we do this on a Linux (or other UNIX-like) system. Here's what happens, in enough detail to make your eyes bleed.
  1. Your web browser invokes the gethostbyname() function to turn the hostname you entered into an IP address. (We'll get back to how this happens, exactly, in a moment.)
  2. Your web browser consults the /etc/services file to determine what well-known port HTTP resides on, and finds 80.
  3. Two more pieces of information are determined by Linux so that your browser can initiate a connection: your local IP address, and an ephemeral port number. Combined with the destination (server) IP address and port number, these four pieces of information represent what is called an Internet PCB, or protocol control block. Furthermore, the IANA defines the port range 49,152 through 65,535 for use in this capacity. Exactly how a port number is chosen from this range depends upon the Linux kernel version. The most common allocation algorithm is to simply remember the last-allocated number, and increment it by one each time a new PCB is requested. When 65,535 is reached, the algorithm loops around to 49,152. (This has certain negative security implications, and is addressed in more detail in Port Randomization by Larsen, Ericsson, et al, 2007.) Also see TCP/IP Illustrated, Volume 2: The Implementation by Wright and Stevens, 1995.
  4. Your web browser sends an HTTP GET request to the remote server. Be careful here, as you must remember that your web browser does not speak TCP, nor does it speak IP. It only speaks HTTP. It doesn't care about the transport protocol that gets its HTTP GET request to the server, nor how the server gets its answer back to it.
  5. The HTTP packet passes down the four-layer model that TCP/IP uses, from the application layer where your browser resides to the transport layer. The application layer is connectionless, with addressing based upon URLs, or uniform resource locators.
  6. The transport layer encapsulates the HTTP request inside TCP (transmission control protocol; transport layer for transmission control, makes sense, right?). The TCP packet is then passed down to the second layer, the network layer. TCP is a connection-based, or persistent, protocol, with addressing based upon port numbers. TCP does not care about IP addresses, only that some specific port on the client side is bound to a specific port on the server side.
  7. The network layer uses IP (Internet protocol), and adds an IP header to the TCP packet. The packet is then passed down to the first layer, the link layer. IP is a connectionless, best-effort protocol, with addressing based upon 32-bit IP addresses. Routing, but not switching, occurs at this layer.
  8. The link layer uses the Ethernet protocol. This is a connectionless layer, with addressing based upon 48-bit Ethernet addresses. Switching occurs at this layer.
  9. The kernel must determine which connection to send the packet over. This happens by taking the destination IP address and consulting the routing table (seen by running netstat -rn). First, the kernel attempts to match the destination by host address. (For example, if you have a specific route to just the one host you're trying to reach in your browser.) If this fails, then network address matching is tried. (For example, if you have a specific route to the network in which the host you're trying to reach resides.) Lastly, the kernel searches for a default route entry. This is the most common case.
  10. Now that the kernel knows the next hop, that is, the node that the packet should be handed off to, the kernel must make a physical connection to it. Routing depends upon each node in the chain having a literal electrical connection to the next node; it doesn't matter how many nodes (or hops) the packet must pass through so long as each and every one can "see" its neighbor. This is handled on the link layer, which if you'll recall uses a different addressing scheme than IP addresses. This is where ARP, or the address resolution protocol, comes into play. The kernel sends an ARP broadcast which says, "Who has the gateway's IP address? Tell my IP address." The default gateway machine sees the ARP request and replies with its own Ethernet address, say 8:0:20:4:3f:2a. The kernel places the answer in the ARP cache, which can be viewed by running arp -a. Now that this information is known, the kernel adds an Ethernet header to the packet, and places it on the wire.
  11. The default gateway receives the packet. First, it checks to see if the Ethernet address matches its own. If it does not, the packet is silently discarded (unless the interface is in promiscuous mode.) Next, it checks to see if the destination IP address matches any of its configured interfaces. In our scenario here, it does not: remember that the packet is being routed to another destination by way of this gateway. So the gateway now checks to see if it is configured to permit IP forwarding. If it is not, the packet is silently discarded. We'll assume the gateway is configured to forward IP, so now it must determine what to do with the packet. It consults its routing table, and attempts to match the destination in the same way our web browser system did a moment ago: exact host match first, then network, then default gateway. Yes, a default gateway server can itself have a default gateway. It also uses ARP in the same way as we saw a moment ago in order to reach the next hop, and pass the packet on to it. Before doing so, however, it decrements the TTL (time-to-live) field in the packet, and if the TTL reaches zero, discards the packet and sends an ICMP time exceeded (TTL expired in transit) message back to the sender. Each hop along the way does the same thing. Also, if the packet came in on the same interface that the gateway's routing table says the packet should go out over to reach the next hop, an ICMP redirect message is sent to the sender, instructing it to bypass this gateway and directly contact the next hop on all subsequent packets. You'll know if this happened because a new route will appear in your web browser machine's routing table.
  12. Each hop passes the packet along, until at the destination the last router notices that it has a direct route to the destination, that is, a routing table entry is matched that is not another router. The packet is then delivered to the destination server.
  13. The destination server notices that at long last the IP address in the packet is its own, that is, it resolves via ARP to the Ethernet address of the server itself. Since it's not a forwarding case, and since the IP address matches, it now examines the TCP portion of the packet to determine the destination port. It also looks at the TCP header flags, and since this is the first packet, observes that only the SYN (synchronize) flag is set. Thus, this first packet is one of three in the TCP handshake process. If the port the packet is addressed to (in our case, port 80) is not bound by a process (for example, if Apache crashed) then the server replies with a TCP reset (RST) packet and the connection attempt fails. (An ICMP port unreachable message is what you'd see in the equivalent UDP case.) If the port is valid, and we'll assume it is, a TCP reply is sent, with both the SYN and ACK (acknowledge) flags set.
  14. The packet passes back through the various routers, and unless source routing is specified, the path back may differ from the path used to first reach the server. The client (the machine running your web browser) receives the packet, notices that it has the SYN and ACK flags set, and contains IP and port information that matches a known PCB. It replies with a TCP packet that has only the ACK flag set.
  15. This packet reaches the server, and the server moves the connection from SYN_RECEIVED to ESTABLISHED. Using the mechanisms of TCP, the server now guarantees data delivery between itself and the client until such time as the connection times out, or is closed by either side. This differs sharply from UDP, where there is no handshake process and packet delivery is not guaranteed: delivery is best-effort, and it is left up to the application to figure out whether the packets arrived.
  16. Now that we have a live TCP connection, the HTTP request that started all of this may be sent over the connection to the server for processing. Depending on whether or not the HTTP server (and client) support persistent connections, the reply may consist of only a single object (usually the HTML page), after which the connection is closed. If persistence is enabled, then the connection is left open for subsequent HTTP requests (for example, all of the page elements, such as images, style sheets, etc.)
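The browser-side bookkeeping in steps 3 and 4 can be sketched in shell. The port allocator below is the naive increment-and-wrap scheme described in step 3 (real kernels may randomize instead), and the Host: header value is a placeholder, not anything from the text above.

```shell
# Sketch of steps 3 and 4 from the client's side. Assumptions: the
# simple "last port + 1, wrap at 65535" allocator; example.com as a
# placeholder server name.

LAST_PORT=49151   # pretend this was the most recently allocated port

next_ephemeral_port() {
    LAST_PORT=$((LAST_PORT + 1))
    if [ "$LAST_PORT" -gt 65535 ]; then
        LAST_PORT=49152   # wrap back to the bottom of the IANA range
    fi
    echo "$LAST_PORT"
}

# One of the four values that identify the PCB: the local ephemeral port.
local_port=$(next_ephemeral_port)
echo "local ephemeral port: $local_port"

# The browser's entire contribution: the raw bytes of an HTTP request.
# Everything below this (TCP, IP, Ethernet) is invisible to it.
printf 'GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n'
```

Note that the allocator's wrap-around is exactly the behavior that the port randomization paper cited above is concerned with: sequential allocation makes the next port predictable.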
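The step-9 route lookup can be sketched the same way: try a host route first, then a network route, then the default. This is a toy version of longest-prefix matching, and every address and gateway name in it is an example value (RFC 5737 style) that I made up for illustration.

```shell
# Toy routing-table lookup: most-specific entry first, the same order
# of preference the kernel uses (host /32, network /24, default /0).

ip_to_int() {
    local a b c d
    IFS=. read -r a b c d <<< "$1"
    echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

lookup_route() {
    local dst dest bits gw mask
    dst=$(ip_to_int "$1")
    while read -r dest bits gw; do
        # build the netmask from the prefix length
        mask=$(( bits == 0 ? 0 : (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
        if [ $(( dst & mask )) -eq $(( $(ip_to_int "$dest") & mask )) ]; then
            echo "$gw"
            return
        fi
    done <<'EOF' 32 host-route 24 lan-route 0 default-gateway
EOF
}

lookup_route      # exact host match
lookup_route      # network match
lookup_route        # falls through to the default route
```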
Okay, as I mentioned earlier, we will now address how the client resolves the hostname into an IP address using DNS. All of the above ARP and IP information holds true for the DNS query and replies.
  1. The gethostbyname() function must first determine how it should go about turning a hostname into an IP address. To accomplish this, it consults the /etc/nsswitch.conf file, and looks for a line beginning with hosts:. It then examines the keywords listed, and tries each of them in the order given. For the purposes of this example, we'll assume the pattern to be files dns nis.
  2. The keyword files instructs the resolver to consult the /etc/hosts file. Since the web server we're trying to reach doesn't have an entry there, the match attempt fails. The resolver checks to see if another resolution method exists, and if it does, it tries it.
  3. The next method is dns, so the resolver now consults the /etc/resolv.conf file to determine what DNS server, or name resolver, it should contact.
  4. A UDP request is sent to the first-listed name server, addressed to port 53.
  5. The DNS server receives the request. It examines it to determine if it is authoritative for the requested domain; that is, does it directly serve answers for the domain? If not, then it checks to see if recursion is permitted for the client that sent the request.
  6. If recursion is permitted, then the DNS server consults its root hints file for the appropriate root DNS server to talk to. It then sends a DNS request to the root server, asking it for the authoritative server for this domain. The root domain server replies with a third DNS server's name, the authoritative DNS server for the domain. This is the server that is listed when you perform a whois on the domain name.
  7. The DNS server now contacts the authoritative DNS server, and asks for the IP address of the given hostname. The answer is cached, and the answer is returned to the client.
  8. If recursion is not supported, then the DNS server simply replies with go away or go talk to a root server. The client is then responsible for carrying on, as follows.
  9. The client receives the negative response, and sends the same DNS request to a root DNS server.
  10. The root DNS server receives the query, and since it is not configured to support recursion, but is a root server, it responds with "go ask so-and-so, that's the authoritative server for that domain." Note that this is not the final answer, but is a definite improvement over a simple go away.
  11. The client now knows who to ask, and sends the original DNS query for a third time to the authoritative server. It replies with the IP address, and the lookup process is complete.
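Steps 1 through 4 of the resolution process boil down to parsing two files. The sketch below runs awk over sample copies of them; on a live system you would read /etc/nsswitch.conf and /etc/resolv.conf themselves, and the nameserver address here is an RFC 5737 example, not a real one.

```shell
# Sample file contents, standing in for the real /etc files.
nsswitch='hosts: files dns nis'
resolv='nameserver'

# Step 1: extract the lookup order from the hosts: line.
order=$(printf '%s\n' "$nsswitch" | awk '/^hosts:/ { $1 = ""; sub(/^ /, ""); print }')
echo "lookup order: $order"

# Step 4: the UDP query on port 53 goes to the first listed server.
ns=$(printf '%s\n' "$resolv" | awk '/^nameserver/ { print $2; exit }')
echo "first name server: $ns"
```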
A few notes on things that I didn't want to clutter up the above narrative with:
  • When a network interface is first brought online, be it during boot or manually by the administrator, something called a gratuitous ARP request is broadcast. It literally asks, "who has my IP address? Tell my IP address." This looks redundant at first glance, but it actually serves a dual purpose: it allows neighboring machines to cache the new IP to Ethernet address mapping, and if another machine already has that IP address, it will reply with a typical ARP response giving its own Ethernet address, say 8:0:20:4:3f:2a. The first machine will then log an error message to the console saying that the IP address is already in use by 8:0:20:4:3f:2a. This is done to communicate to you that your Excel spreadsheet of IP addresses is wrong and should be replaced with something a bit more accurate and reliable.
  • The Ethernet layer contains a lot more complexities than I detailed above. In particular, because only one machine can be talking over the wire at a time (literally due to electrical limitations) there are various mechanisms in place to prevent collisions. The most widely used is called CSMA/CD, or Carrier Sense Multiple Access with Collision Detection, where each network card is responsible for transmitting only when the wire is clear, that is, when a carrier signal is present. Should two cards start transmitting at the exact same instant, each transmitting card is responsible for detecting the collision, briefly jamming the wire so all stations notice it, and then stopping and waiting a random time interval before trying again. This is the main reason for network segmentation; the more hosts you have on a single wire, the more collisions you'll get; and the more collisions you get, the slower the overall network becomes.
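The "random time interval" above is truncated binary exponential backoff: after the n-th consecutive collision a card waits a random number of slot times chosen from 0 through 2^n - 1, capped at 1023. A sketch (this simplifies IEEE 802.3, and the function name is mine; requires bash for $RANDOM):

```shell
# Truncated binary exponential backoff, as used by CSMA/CD.
backoff_slots() {
    local n=$1 limit
    limit=$(( n < 10 ? (1 << n) : 1024 ))   # cap the window at 2^10 slots
    echo $(( RANDOM % limit ))              # random slot in 0 .. limit-1
}

for collision in 1 2 3 10 16; do
    echo "after collision $collision: wait $(backoff_slots "$collision") slot times"
done
```

The widening window is why a congested segment degrades gracefully rather than livelocking: the more collisions a card has seen, the longer it is likely to stay quiet.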

Tuesday, December 22, 2009

howto install mod_security2 with apache2 in Ubuntu

There are many significant changes and enhancements in ModSecurity 2.x over the 1.x branch, including:
  • use core rules with various features
  • five processing phases: request headers, request body, response headers, response body, logging
  • per-rule transformation options (previously normalization was implicit and hard-coded). New transformation functions were added.
  • transaction variables. This can be used to store pieces of data, create a transaction anomaly score etc.
  • data persistence. It can be configured any way you want. Most people will want to use this feature to track IP addresses, application sessions, and application users.
  • support for anomaly scoring and basic event correlation (counters can be automatically decreased over time; variables can be expired).
  • support for web applications and session IDs.
  • regular expression back-references (allows one to create custom variables using transaction content).
  • many new functions that can be applied to the variables (previously, you could only use regular expressions).
  • XML support (parsing, validation, XPath).

Download mod_security

  • Download source from mod_security2 (you need to sign up to download).
There is currently no binary of mod_security 2.5.6 available for Ubuntu, so you need to compile it yourself.

Step by Step Ubuntu install guide

1) install g++ environment

apt-get install g++ doc-base autoconf automake1.9 bison libtool make

2) install preconditions for mod_security2

apt-get install apache2-threaded-dev libxml2-dev libcurl4-gnutls-dev
  • try to run configure with missing libraries or header files
    ./configure --with-apxs2=/usr/bin/apxs2
    checking for strtol... yes
    configure: looking for Apache module support via DSO through APXS
    configure: error: couldn't find APXS
  • install apache apxs
    apt-get install apache2-threaded-dev
  • next error with configure: missing libxml2
    checking for libxml2 config script... no
    configure: *** libxml2 library not found.
    configure: error: libxml2 library is required
  • install libxml2-dev
    sudo apt-get install libxml2-dev
  • next error with configure: missing libcurl
    • this step is optional, only needed if you want to build mlogc; I did it.
      checking for libcurl config script... no
      configure: *** curl library not found.
      configure: NOTE: curl library is only required for building mlogc
  • install libcurl4-gnutls-dev
    sudo apt-get install libcurl4-gnutls-dev

3) final configure works, run make now

cd ~/modsecurity-apache_2.5.6/apache2
./configure --with-apxs2=/usr/bin/apxs2

checking for g++... g++
checking for C++ compiler default output file name... a.out
checking whether the C++ compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables... 
checking for suffix of object files... o
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking for gcc... gcc
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking how to run the C preprocessor... gcc -E
checking for a BSD-compatible install... /usr/bin/install -c
checking whether ln -s works... yes
checking whether make sets $(MAKE)... yes
checking for ranlib... ranlib
checking for perl... /usr/bin/perl
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking fcntl.h usability... yes
checking fcntl.h presence... yes
checking for fcntl.h... yes
checking limits.h usability... yes
checking limits.h presence... yes
checking for limits.h... yes
checking for stdlib.h... (cached) yes
checking for string.h... (cached) yes
checking for unistd.h... (cached) yes
checking for an ANSI C-conforming const... yes
checking for inline... inline
checking for C/C++ restrict keyword... __restrict
checking for size_t... yes
checking whether struct tm is in sys/time.h or time.h... time.h
checking for uint8_t... yes
checking for stdlib.h... (cached) yes
checking for GNU libc compatible malloc... yes
checking for working memcmp... yes
checking for atexit... yes
checking for fchmod... yes
checking for getcwd... yes
checking for memset... yes
checking for strcasecmp... yes
checking for strchr... yes
checking for strdup... yes
checking for strerror... yes
checking for strncasecmp... yes
checking for strrchr... yes
checking for strstr... yes
checking for strtol... yes
configure: looking for Apache module support via DSO through APXS
configure: found apxs at /usr/bin/apxs2
configure: checking httpd version
configure: httpd is recent enough
checking for libpcre config script... /usr/bin/pcre-config
configure: using '-L/usr/lib -lpcre' for pcre Library
checking for libapr config script... /usr/bin/apr-1-config
configure: using ' -luuid -lrt -lcrypt  -lpthread -ldl' for apr Library
checking for libapr-util config script... /usr/bin/apu-1-config
configure: using ' -L/usr/lib -laprutil-1' for apu Library
checking for libxml2 config script... /usr/bin/xml2-config
configure: using '-lxml2' for libxml Library
checking for pkg-config script for lua library... no
configure: optional lua library not found
checking for libcurl config script... /usr/bin/curl-config
configure: using '-lcurl -lgssapi_krb5' for curl Library
configure: creating ./config.status
config.status: creating Makefile
config.status: creating build/apxs-wrapper
config.status: creating t/
config.status: creating t/
config.status: creating t/
config.status: creating t/
config.status: creating t/regression/server_root/conf/httpd.conf
config.status: creating ../tools/
config.status: creating mod_security2_config.h

4) install mod_security2

I did it this manual way to control what is installed;
of course you can use "make install".
cp modsecurity-apache_2.5.6/apache2/.libs/ /usr/lib/apache2/modules
chmod 644 /usr/lib/apache2/modules/
chown root:root /usr/lib/apache2/modules/

5) include mod_security2 the apache2 way

/etc/apache2/mods-available# cat mod_security2.load 
LoadModule security2_module /usr/lib/apache2/modules/

/etc/apache2/mods-enabled# ln -s ../mods-available/mod_security2.load .

6) load apache2 mod_unique_id

  • run apachectl configtest and find the missing mod_unique_id error
    apachectl configtest
    less /var/log/apache2/error.log
    [Fri Aug 15 11:59:34 2008] [error] ModSecurity: ModSecurity requires mod_unique_id to be installed
  • fix it with a2enmod or make a manual symlink in mods-enabled
    a2enmod unique_id

7) reload apache config

  • reload config and check error.log
    apachectl configtest
    apachectl graceful
    less /var/log/apache2/error.log
after reloading apache, send a test request to your webserver and check access.log and error.log

8) initial mod_security configuration

After the initial installation of mod_security2 you can add mod_security2 rules. For example, add a core rule file to the apache conf directory:
/etc/apache2/conf.d/mod_security2# ls
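If you don't want to start with the full core rule set, a minimal hand-written rule file makes a good smoke test. Everything below is illustrative: the filename and the rule pattern are made up for this example, but the directives are standard ModSecurity 2.x syntax.

```apache
# /etc/apache2/conf.d/mod_security2/minimal.conf -- illustrative example only;
# the official core rules are far more complete.
SecRuleEngine On
SecDefaultAction "log,deny,phase:2,status:403"

# deny requests whose arguments contain an obvious SQL injection probe
SecRule ARGS "union select" "t:lowercase,msg:'SQL injection attempt'"
```

After adding a rule file, run apachectl configtest and reload as shown in step 7.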

9) adapt log paths

SecAuditLog /var/log/apache2/modsec_audit.log
SecDebugLog             /var/log/apache2/modsec_debug.log

10) example: set higher SecDebugLogLevel

# NOTE Debug logging is generally very slow. You should never
#      use values greater than "3" in production.
#      0 - no logging.
#      1 - errors (intercepted requests) only.
#      2 - warnings.
#      3 - notices // default value.
#      4 - details of how transactions are handled.
#      5 - as above, but including information about each piece of information handled.
#      9 - log everything, including very detailed debugging information.

SecDebugLogLevel        5


Tuesday, December 15, 2009

HOWTO : Make sure no rootkit on your Ubuntu server

To ensure that rootkits, trojans, or worms have not been installed on your server without your approval, you should check it frequently.


Get the chkrootkit package :

sudo apt-get install chkrootkit

Make a Cron Job to do the scan daily at 0700 hours :

sudo crontab -e

0 7 * * * /usr/sbin/chkrootkit; /usr/sbin/chkrootkit -q 2>&1 | mail -s "Daily ChkRootKit Scan"
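One detail worth calling out in the cron line: the stderr redirection must be written 2>&1 with no space, otherwise the shell passes a literal "2" as an argument instead of merging the streams. A quick demonstration (noisy is a stand-in for chkrootkit, not a real tool):

```shell
# A stand-in for chkrootkit that writes to both output streams.
noisy() {
    echo "scan report"          # goes to stdout
    echo "scan warning" >&2     # goes to stderr
}

# "2>&1" merges stderr into stdout *before* the pipe, so warnings
# reach mail too; without it they would bypass the pipe entirely.
merged=$(noisy 2>&1)
echo "$merged"
```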

Do a manual scan :

sudo /usr/sbin/chkrootkit

Rootkit Hunter (Optional)

sudo apt-get install rkhunter

Make a Cron Job to do the scan daily at 0500 hours :

sudo crontab -e

0 5 * * * rkhunter --cronjob --rwo | mail -s "Daily Rootkit Hunter Scan"

Do a manual scan :

sudo rkhunter --check

Forensic tool to find hidden processes and ports – unhide

Get the unhide package :

sudo apt-get install unhide

Make a Cron Job to do the scan daily between 0800 and 0930 hours :

sudo crontab -e

0 8 * * * unhide proc; unhide proc -q 2>&1 | mail -s "Daily unhide proc Scan"

30 8 * * * unhide sys; unhide sys -q 2>&1 | mail -s "Daily unhide sys Scan"

0 9 * * * unhide brute; unhide brute -q 2>&1 | mail -s "Daily unhide brute Scan"

30 9 * * * unhide-tcp; unhide-tcp -q 2>&1 | mail -s "Daily unhide-tcp Scan"

Do a manual scan :

sudo unhide proc
sudo unhide sys
sudo unhide brute
sudo unhide-tcp

Beware :
RootKit Hunter and ChkRootKit may produce some false positives when your packages or files have been updated, or when legitimate files behave similarly to a rootkit.

Remarks :
These checks cannot prove with 100% certainty that your system is free from rootkit attacks.

Securing MySQL on Linux


MySQL is a very popular open source database. Due to its speed and stability it is used on millions of servers world wide. MySQL has a simple and effective security mechanism, however, many measures need to be taken to make a default installation secure. Whilst the measures described below will enable you to secure your database it is also important that you secure the underlying operating system as much as possible too.


It is important to run MySQL as its own user. In order to do so we need to create such a user and group.

# groupadd mysql
# useradd -c "MySQL Server" -d /dev/null -g mysql -s /bin/false mysql

Install MySQL in /usr/local/mysql

./configure --prefix=/usr/local/mysql --with-mysqld-user=mysql \
--with-unix-socket-path=/tmp/mysql.sock --with-mysqld-ldflags=-all-static
make install
strip /usr/local/mysql/libexec/mysqld
chown -R root /usr/local/mysql
chown -R mysql /usr/local/mysql/var
chgrp -R mysql /usr/local/mysql

The configure option --with-mysqld-user=mysql enables MySQL to run as the mysql user. The --with-mysqld-ldflags=-all-static option makes it easier to chroot MySQL.

Copy the example configuration file from the MySQL source, support-files/my-medium.cnf, to /etc/my.cnf and set the appropriate permissions, chmod 644 /etc/my.cnf.

Once we have MySQL installed, test the installation. Start MySQL with /usr/local/mysql/bin/mysqld_safe & and log on as the root user, mysql -u root. If you see the MySQL prompt, the database is running and we can proceed to chroot it. If the installation is not working, examine the log files to find out what the problem is. Otherwise, shut down the server with /usr/local/mysql/bin/mysqladmin -u root shutdown.

Chrooting MySQL

First, create the necessary directory structure for the database.

mkdir -p /chroot/mysql/dev /chroot/mysql/etc /chroot/mysql/tmp /chroot/mysql/var/tmp /chroot/mysql/usr/local/mysql/libexec /chroot/mysql/usr/local/mysql/share/mysql/english

Now set the correct directory permissions

chown -R root:sys /chroot/mysql
chmod -R 755 /chroot/mysql
chmod 1777 /chroot/mysql/tmp

Once the directories are set up, copy the server's files:

cp /usr/local/mysql/libexec/mysqld /chroot/mysql/usr/local/mysql/libexec/
cp /usr/local/mysql/share/mysql/english/errmsg.sys /chroot/mysql/usr/local/mysql/share/mysql/english/
cp -r /usr/local/mysql/share/mysql/charsets /chroot/mysql/usr/local/mysql/share/mysql/
cp /etc/hosts /chroot/mysql/etc/
cp /etc/host.conf /chroot/mysql/etc/
cp /etc/resolv.conf /chroot/mysql/etc/
cp /etc/group /chroot/mysql/etc/
cp /etc/master.passwd /chroot/mysql/etc/passwords
cp /etc/my.cnf /chroot/mysql/etc/

Finally, copy the mysql databases which contain the grant tables storing the MySQL access privileges:

cp -R /usr/local/mysql/var/ /chroot/mysql/usr/local/mysql/var
chown -R mysql:mysql /chroot/mysql/usr/local/mysql/var
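With this much copied, it's worth sanity-checking that everything mysqld needs actually landed inside the chroot. The sketch below is generic: it runs against a scratch directory so it's safe to try anywhere; point it at /chroot/mysql on the real system. The file list is a sample, not exhaustive.

```shell
# Verify that a chroot tree contains a list of required files.
check_chroot() {
    local root=$1 missing=0 f
    for f in etc/my.cnf etc/hosts usr/local/mysql/libexec/mysqld \
             usr/local/mysql/share/mysql/english/errmsg.sys; do
        if [ ! -e "$root/$f" ]; then
            echo "missing: $f"
            missing=1
        fi
    done
    return "$missing"
}

# Demonstrate against a scratch tree rather than the real /chroot/mysql.
scratch=$(mktemp -d)
mkdir -p "$scratch/etc" "$scratch/usr/local/mysql/libexec" \
         "$scratch/usr/local/mysql/share/mysql/english"
touch "$scratch/etc/my.cnf" "$scratch/etc/hosts" \
      "$scratch/usr/local/mysql/libexec/mysqld" \
      "$scratch/usr/local/mysql/share/mysql/english/errmsg.sys"
check_chroot "$scratch" && echo "chroot tree looks complete"
rm -rf "$scratch"
```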

As with Apache, we need to create a null device:

mknod /chroot/mysql/dev/null c 1 3
chown root:sys /chroot/mysql/dev/null
chmod 666 /chroot/mysql/dev/null
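A note on the device numbers: on Linux, /dev/null is character device major 1, minor 3 (the c 2 2 numbering comes from BSD). You can confirm against the host's own /dev/null before creating the chroot copy:

```shell
# stat prints the device major/minor numbers in hex; for /dev/null on
# Linux this is major 1, minor 3.
stat -c 'major=%t minor=%T' /dev/null
```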

We need to edit the password and groups files to remove any entries bar the mysql user and group.

mysql:x:12347:12348:MySQL Server:/dev/null:/bin/false


In order for PHP to be able to access MySQL we need to create a link to mysql.sock, ln /chroot/mysql/tmp/mysql.sock /chroot/httpd/tmp/. /chroot/mysql/tmp/mysql.sock and /chroot/httpd/tmp/ need to be on same filesystem. This needs to be done every time we startup the MySQL server (the example startup script below handles this).

To run MySQL in a chrooted environment as a user other than root, we need the chrootuid program. Once we have installed chrootuid, test the server: chrootuid /chroot/mysql mysql /usr/local/mysql/libexec/mysqld &. This will run our server as the mysql user.

The MySQL root User and Default Accounts

The MySQL root user should not be confused with the system root user. By default, the MySQL root user has no password. You can check this with mysql -u root, if you get a mysql prompt, no root password is set. The first thing we should do is set a strong password for this user. Never give the system root password to the MySQL root user.

To set the initial root password open a mysql prompt, mysql -u root mysql, and enter the following:

mysql> UPDATE user SET Password=PASSWORD('new_password')
-> WHERE user='root';

Don't forget to FLUSH PRIVILEGES; to make the privileges effective.

As well as setting the root password, we should remove anonymous accounts:

mysql> DELETE FROM user WHERE User = '';

Alternatively set a password for the anonymous accounts:

mysql> UPDATE user SET Password = PASSWORD('new_password')
-> WHERE User = '';

MySQL Privilege System and MySQL Users

The MySQL privilege system allows for authentication of users connecting from specific hosts. Authenticated users can be assigned privileges such as SELECT, INSERT, UPDATE, DELETE etc on a per database, table, column or host basis. When a user connects, MySQL first checks if that user is authorized to connect, based on the host and supplied password. If the user is allowed to connect, MySQL will then check each statement to see if the user is allowed to perform the requested action.

When creating new MySQL users, always give the user a strong password and never store passwords as plain text. Only allow the minimum amount of privileges for a user to accomplish a task and set those privileges on a per database basis. Some extra time spent planning what privileges to assign to users will go a long way in ensuring the security of your data.

You can create a new user with specific privileges using the GRANT statement. For example:

GRANT USAGE ON myapp.* TO 'someuser'@'localhost' IDENTIFIED BY 'some_pass';

This statement will create a MySQL user named someuser who has access to all tables in the myapp database. The USAGE option sets all of the user's privileges to 'No', meaning you must enable specific privileges later. You may replace USAGE with a list of specific privileges. IDENTIFIED BY 'some_pass' sets the account's password to 'some_pass'; GRANT automatically encrypts the password for you. Finally, this user can only connect from localhost. FLUSH PRIVILEGES; is needed to make privilege changes effective.

MySQL access privileges are stored in the grant tables of the mysql database. You should never grant normal users privileges to edit entries in the mysql database. That right should be reserved for the root user. There are several tables in the mysql database which allow for a fine grained level of control over user privileges.

The user table is the most important of the MySQL grant tables. It contains the username and password for the user as well as the host from which a user can connect. There are also many fields specifying a wide range of privileges such as SELECT, INSERT, DELETE, FILE, PROCESS. You should examine this table and the MySQL manual yourself to become familiar with all of the options available. Setting a value of 'N' for a field disables the privilege and 'Y' enables it.

You can change privileges using an SQL UPDATE query or the GRANT statement. If you are using SQL statements such as UPDATE or INSERT to update or set user passwords, be sure to use the PASSWORD() function to encrypt the password in the database. Finally, remember to FLUSH PRIVILEGES; for any changes you make to become effective. For example:

UPDATE user SET Host='localhost', Password=PASSWORD('new_pass'),
Reload_priv='Y', Process_priv='Y' WHERE

Of the different privileges, most are self-explanatory; however, some bear special consideration. The PROCESS and SUPER privileges should never be given to untrusted users. A user with these privileges may run mysqladmin processlist, which shows a list of currently executing queries. This list could potentially reveal sensitive data such as passwords.

The FILE privilege should also not be granted lightly. This privilege allows users to read and write files anywhere on the filesystem to which the mysqld process has access.

Privileges which confer system or database administrative rights, such as FILE, GRANT, ALTER, SHOW DATABASE, RELOAD, SHUTDOWN, PROCESS, and SUPER, should not generally be given to accounts used by specific applications, especially web based applications. Furthermore, accounts for specific applications should only have access to the databases related to that specific application.

The other tables in the mysql database give an even finer grained level of control over privileges:

db - controls the access of users to specific databases.

tables_priv - controls the access of users to specific tables.

columns_priv - controls the access of users to specific columns of a table.

hosts - specifies the actions which can be performed from a particular host.

One final thing to note is that, if you don't completely trust your DNS, use IP numbers in grant tables in place of host names. This makes it more difficult to spoof hosts.

Local Security

There are a number of measures we need to take to improve security on the local machine. Most importantly, never run mysqld as root as, among other risks, any user with the FILE privilege will then be capable of creating files as the root user.

We should also make sure that only the mysql user has read and write access to the database directory. Data in the database files can easily be viewed with any text editor, so any user with read or write access to the files could read or alter data, bypassing MySQL's privileges.

The mysql command history is stored in $HOME/.mysql_history. This may contain sensitive information such as passwords. You should clear the file with echo > $HOME/.mysql_history. To prevent the file from being written to in the future, link the .mysql_history files of administrative users to /dev/null: ln -s /dev/null .mysql_history
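The steps above can be sketched as a small POSIX shell function (removing the file and then linking it is an assumption equivalent to clearing it first; run it once per administrative user):

```shell
# Sketch: scrub the mysql command history and pin it to /dev/null so
# passwords cannot accumulate there again. The home directory is a
# parameter so the function can be exercised outside a real $HOME.
scrub_history() {
    h="$1/.mysql_history"
    rm -f "$h"            # drop the file and any sensitive contents
    ln -s /dev/null "$h"  # future history writes are discarded
}
```

Typical usage would be scrub_history "$HOME" for each administrative account.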

If you are only using MySQL on the local machine, for example for PHP web-based applications, add the line skip-networking to the [mysqld] section of /chroot/mysql/etc/my.cnf. This will disable all TCP networking features of the MySQL daemon.

We can also disable the LOAD DATA LOCAL INFILE command, which allows reading of local files and is potentially dangerous. Add the line set-variable=local-infile=0 to the [mysqld] section of /chroot/mysql/etc/my.cnf.
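Taken together with the skip-networking setting above, the hardened [mysqld] section of /chroot/mysql/etc/my.cnf would contain roughly the following fragment (a sketch using the same older set-variable syntax as this article):

```
[mysqld]
skip-networking
set-variable=local-infile=0
```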

Finally, add the line socket = /chroot/mysql/tmp/mysql.sock to the [client] section of /etc/my.cnf. Notice that we are adding this line to /etc/my.cnf not /chroot/mysql/etc/my.cnf. This is because, while the MySQL server daemon will use /chroot/mysql/etc/my.cnf, our MySQL administrative programs such as mysqladmin are not in our chroot and will therefore read configuration from /etc/my.cnf.

Securing Remote Access

The most important step in securing remote access to your MySQL server is having a firewall. Your firewall should only allow trusted hosts access to MySQL's port, 3306. Better still is to firewall off your MySQL server altogether and only allow access through an SSH tunnel as described below.

Always use passwords for user accounts, even for trusted client programs. The password in a MySQL connection is sent encrypted; however, in versions prior to 4.1.1 the encryption was not particularly strong. In version 4.1.1 the encryption algorithm was much improved.

Even though the password is sent encrypted, data is sent as plain text. If you are connecting across an untrusted network, you should use an SSH-encrypted tunnel. SSH tunneling allows us to connect to a MySQL server from behind a firewall, even when the MySQL port is blocked. To set up the tunnel, use the command ssh ssh_server -L 5001:mysql_server:3306 sleep 99999. You need not have direct access to mysql_server, provided ssh_server does. Now you can connect to port 5001 on the local machine with your favorite database client and the connection will be forwarded silently to the remote machine in an encrypted SSH tunnel.


It is important to make regular backups of your databases. MySQL includes two utilities which make this easy, mysqlhotcopy and mysqldump.

To use mysqlhotcopy, a user needs access to the files for the tables that they are backing up, the SELECT privilege for those tables, and the RELOAD privilege (in order to execute FLUSH TABLES). A database can be backed up using mysqlhotcopy db_name [/path/to/backup_db_dir].

mysqldump supports more options and is especially useful for copying databases between servers, backing up multiple databases at once or making backups of the database structure only. Databases can be backed up using one of the following commands:

mysqldump [options] db_name [tables]
mysqldump [options] --databases DB1 [DB2 DB3...]
mysqldump [options] --all-databases

For example, you can back-up all your databases and compress them in one go with the command:

date=`date -I`; mysqldump --opt --all-databases -u user --password="your_pass" | bzip2 -c > databasebackup-$date.sql.bz2

The --opt option is shorthand for --add-drop-table --add-locks --create-options --disable-keys --extended-insert --lock-tables --quick --set-charset. This should create a back-up which is quick and easy to restore. In fact, this option is enabled by default in versions 4.1 and later; you can disable it with --skip-opt.
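As a sketch, the one-liner above can be wrapped so the dated file name is built in one place (the account name backup_user is hypothetical; the function only prints the command line, so nothing touches the server until you execute its output):

```shell
# Sketch: build the dated backup command from the article's one-liner.
# "backup_user" is a hypothetical account; credentials would come from
# --password or a ~/.my.cnf file when the command is actually run.
backup_cmd() {
    date=$(date -I)
    echo "mysqldump --opt --all-databases -u backup_user | bzip2 -c > databasebackup-$date.sql.bz2"
}
```

Running backup_cmd | sh (with credentials in place) performs the actual dump, e.g. from a nightly cron job.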

To restore a database from a file created by mysqldump, you just need mysql -u user -p db_name < backup-file.sql. The -p option will have mysql prompt for a password.

Server Startup

The following script can be used for starting your MySQL server.

#!/bin/sh
CHROOT_MYSQL=/chroot/mysql
CHROOT_PHP=/chroot/httpd
SOCKET=/tmp/mysql.sock
MYSQLD=/usr/local/mysql/libexec/mysqld
PIDFILE=/usr/local/mysql/var/`hostname`.pid
CHROOTUID=/usr/local/sbin/chrootuid

echo -n " mysql"
case "$1" in
    start)
        rm -rf ${CHROOT_PHP}/${SOCKET}
        nohup ${CHROOTUID} ${CHROOT_MYSQL} mysql ${MYSQLD} >/dev/null 2>&1 &
        ;;
    stop)
        kill `cat ${CHROOT_MYSQL}/${PIDFILE}`
        echo ""
        ;;
    *)
        echo "Usage: `basename $0` {start|stop}" >&2
        exit 64
        ;;
esac
exit 0


The procedures we have seen will reduce the risk of a potential break-in to our database server. MySQL's extensive privilege system allows us to protect the data stored within our database. As always, we should remain vigilant and be sure to apply patches and upgrades to our server as and when they become available.

Wednesday, December 9, 2009

Application Server HOW TO

Using Apache with mod_proxy

This page describes how to integrate Confluence into an Apache website, using mod_proxy. There are some common situations where you might do this:

* You have an existing Apache-based website, and want to add Confluence to the mix (e.g.
* You have two or more Java applications, each running in its own application server on different ports, e.g. http://localhost:8080/confluence and http://localhost:8081/jira. By setting up Apache with mod_proxy, you can have both available on the regular HTTP port (80), e.g. at and If you are running JIRA and Confluence, we recommend this setup. It allows each app to be restarted, managed and debugged separately.

This page describes how to configure mod_proxy. We describe two options:

* If you want a URL like, go to the simple configuration.
* If you want a URL like, go to the complex configuration.

Simple configuration
Set the context path

First, set your Confluence application path (the part after hostname and port) correctly. Say you want Confluence available at, and you currently have it running at http://localhost:8080/. The first step is to get Confluence available at http://localhost:8080/confluence/.

To do this in Tomcat (bundled with Confluence), edit conf/server.xml, locate the "Context" definition:

and change it to:

Then restart Confluence, and ensure you can access it at http://localhost:8080/confluence/
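The two Context definitions referred to above look roughly like this (a sketch; the docBase and other attribute values are assumptions based on the Tomcat bundled with Confluence):

```xml
<!-- before: Confluence served from the root path -->
<Context path="" docBase="../confluence" debug="0" reloadable="false">

<!-- after: Confluence served from /confluence -->
<Context path="/confluence" docBase="../confluence" debug="0" reloadable="false">
```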
Configure mod_proxy

Now enable mod_proxy in Apache, and proxy requests to the application server by adding the example below to your Apache httpd.conf (note: the files may be different on your system; the JIRA docs describe the process for Ubuntu/Debian layout):
# Put this after the other LoadModule directives
LoadModule proxy_module /usr/lib/apache2/modules/
LoadModule proxy_http_module /usr/lib/apache2/modules/

# Put this in the main section of your configuration (or desired virtual host, if using Apache virtual hosts)
ProxyRequests Off
ProxyPreserveHost On

<Proxy *>
    Order deny,allow
    Allow from all
</Proxy>

ProxyPass /confluence http://localhost:8080/confluence
ProxyPassReverse /confluence http://localhost:8080/confluence

<Location /confluence>
    Order allow,deny
    Allow from all
</Location>
Note to Windows Users

It is recommended that you specify the absolute path to the and files.
Set the URL for redirection

You will need to modify the server.xml file in your Tomcat's conf directory and set the URL for redirection.

Locate this code segment

And append the following segment:

Replace with the URL you wish to be redirected to.
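The appended segment is a pair of attributes inside the existing Connector element; for example (www.example.com is a placeholder for your public hostname, and the other attributes stand for whatever is already in your Connector):

```xml
<Connector port="8080" ... proxyName="www.example.com" proxyPort="80" />
```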

Complex configuration

A complex configuration involves using the mod_proxy_html filter to modify the proxied content en route. This is required if the Confluence path differs between Apache and the application server. For example:
Externally accessible (Apache) URL
Application server URL

Notice that the application path in the URL is different in each. On Apache, the path is /, and on the application server the path is /confluence.
For this configuration, you need to install the mod_proxy_html module, which is not included in the standard Apache distribution.

Alternative solutions are discussed below.
# Put this after the other LoadModule directives
LoadModule proxy_module modules/
LoadModule proxy_http_module modules/
LoadModule proxy_html_module modules/

<VirtualHost *>
    ServerName

    # Put this in the main section of your configuration (or desired virtual host, if using Apache virtual hosts)
    ProxyRequests Off
    ProxyPreserveHost On

    <Proxy *>
        Order deny,allow
        Allow from all
    </Proxy>

    ProxyPass /
    ProxyPassReverse /

    ProxyHTMLURLMap / /confluence/

    <Location />
        Order allow,deny
        Allow from all
    </Location>
</VirtualHost>

The ProxyHTMLURLMap configuration can become more complex if you have multiple applications running under this configuration. The mapping should also be placed in a Location block if the web server URL is a subdirectory and not on a virtual host. The Apache Week tutorial has more information on how to do this.
More information

* The mod_proxy_html site has documentation and examples on the use of this module in the complex configuration.
* Apache Week has a tutorial that deals with a complex situation involving two applications and ProxyHTMLURLMap.
* Using Apache with virtual hosts and mod_proxy shows how to configure the special case where you want JIRA and Confluence running on separate application servers on virtual host subdomains.


If Tomcat is your application server, you have two options:

* use mod_jk to send the requests to Tomcat
* use Tomcat's virtual hosts to make your Confluence application directory the same on the app server and the web server, removing the need for the URL mapping.

If your application server has an AJP connector, you can:

* use mod_jk to send the requests to your application server.

Tuesday, December 1, 2009

Troubleshooting Linux networking: module and driver problems

Network-related problems on your Linux machine can be hard to resolve because they go beyond the trusted environment of your Linux box. But, as a Linux administrator, you can help your network administrator by applying the right technologies. In this article you'll learn how to troubleshoot network-related driver problems.
It's easy to determine that a problem you're encountering is a network problem -- if your computer can't communicate with other computers, something is wrong on the network. But, it may be harder to find the source of the problem. You need to begin by analyzing the chain of elements involved in network communication.
If your host needs to communicate with another host in the network, the following conditions need to be met:
  1. The network card is installed and available in the operating system, i.e., the correct driver is loaded.
  2. The network card has an IP address assigned to it.
  3. The computer can communicate with other hosts in the same network.
  4. The computer can communicate with other hosts in other networks.
  5. The computer can communicate with other hosts using their host names.
Troubleshooting network driver issues
To communicate with other computers on the network, your computer needs a network interface. The way that interface comes into being is well defined. During system boot, the kernel probes the buses that are available and, typically on the PCI bus, finds a network card. Next, it determines which driver is needed to address the network card and, if that driver is available, loads it. Following that, the udev daemon (udevd), started in the initial boot phase of your computer, creates the network device for you. In a simple computer with only one network interface, this will typically be the eth0 device, but as you will read later, other interfaces can also be used. Once the interface has been created, the next stage can begin, in which the network card gets an IP address.
As just discussed, several steps are involved in loading the driver for the network card correctly.
  1. The kernel probes the PCI bus.
  2. Based on the information it finds on the PCI bus, a driver is loaded.
  3. Udev creates the network interface node, which you need in order to actually use the network card.
To fix network card problems, begin by determining whether the network card was really found on the PCI bus. To do that, use the lspci command. Here is example output of lspci:
JBO:~ # lspci
00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (rev 01)
00:01.0 PCI bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge (rev 01)
00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 08)
00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)
00:07.7 System peripheral: VMware Inc Virtual Machine Communication Interface (rev 10)
00:0f.0 VGA compatible controller: VMware Inc Abstract SVGA II Adapter
00:10.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)
02:00.0 USB Controller: Intel Corporation 82371AB/EB/MB PIIX4 USB
02:01.0 Ethernet controller: Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE] (rev 10)
02:02.0 Multimedia audio controller: Ensoniq ES1371 [AudioPCI-97] (rev 02)
02:03.0 USB Controller: VMware Inc Abstract USB2 EHCI Controller
JBO:~ #
Here, at PCI address 02:01.0 an Ethernet network card is found. The network card is an AMD 79c970 and (between square brackets) the PCnet32 kernel module is needed to address this network card.
The next step is to check the hardware configuration as reflected in the /sys tree. Every PCI device has its configuration stored there, and for the network card in this example, it is stored in the directory /sys/bus/pci/devices/0000:02:01.0, which reflects the address of the device on the PCI bus. Here is an example of the contents of this directory:
JBO:/sys/bus/pci/devices/0000:02:01.0 # ls -l
total 0
-rw-r--r-- 1 root root  4096 Oct 18 07:08 broken_parity_status
-r--r--r-- 1 root root  4096 Oct 17 07:50 class
-rw-r--r-- 1 root root   256 Oct 17 07:50 config
-r--r--r-- 1 root root  4096 Oct 17 07:50 device
lrwxrwxrwx 1 root root     0 Oct 17 07:51 driver -> ../../../../bus/pci/drivers/pcnet32
-rw------- 1 root root  4096 Oct 18 07:08 enable
lrwxrwxrwx 1 root root     0 Oct 18 07:08 firmware_node -> ../../../LNXSYSTM:00/device:00/PNP0A03:00/device:06/device:08
-r--r--r-- 1 root root  4096 Oct 17 07:50 irq
-r--r--r-- 1 root root  4096 Oct 18 07:08 local_cpulist
-r--r--r-- 1 root root  4096 Oct 18 07:08 local_cpus
-r--r--r-- 1 root root  4096 Oct 17 07:53 modalias
-rw-r--r-- 1 root root  4096 Oct 18 07:08 msi_bus
drwxr-xr-x 3 root root     0 Oct 17 07:50 net
-r--r--r-- 1 root root  4096 Oct 18 07:08 numa_node
drwxr-xr-x 2 root root     0 Oct 18 07:08 power
-r--r--r-- 1 root root  4096 Oct 17 07:50 resource
-rw------- 1 root root   128 Oct 18 07:08 resource0
-r-------- 1 root root 65536 Oct 18 07:08 rom
lrwxrwxrwx 1 root root     0 Oct 17 07:50 subsystem -> ../../../../bus/pci
-r--r--r-- 1 root root  4096 Oct 17 07:51 subsystem_device
-r--r--r-- 1 root root  4096 Oct 17 07:51 subsystem_vendor
-rw-r--r-- 1 root root  4096 Oct 17 07:51 uevent
-r--r--r-- 1 root root  4096 Oct 17 07:50 vendor
JBO:/sys/bus/pci/devices/0000:02:01.0 #
The most interesting item for troubleshooting is the symbolic link to the driver directory. In this example it points to the pcnet32 driver and, using the information that lspci provided, we know this is the correct driver.
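This check can be scripted; a minimal sketch that resolves the bound driver for any sysfs device directory (pass it a path such as /sys/bus/pci/devices/0000:02:01.0, which on the machine above would report pcnet32):

```shell
# Sketch: print the kernel driver bound to a PCI device by resolving
# the "driver" symlink in the device's sysfs directory.
driver_for() {
    basename "$(readlink "$1/driver")"
}
```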
In most cases, the driver that Linux installs will work fine; in some cases it doesn't. When configuring a Dell server with a Broadcom network card, I have seen severe problems where a ping command using jumbo frame packets was capable of causing a kernel panic. One of the first things to suspect in such a case is the kernel driver for the network card. A nice troubleshooting approach is to start by finding out which version of the driver you are using. You can accomplish this by using the modinfo command on the driver itself. Here is an example of modinfo on the pcnet32 driver:
JBO:/ # modinfo pcnet32
filename:       /lib/modules/
license:        GPL
description:    Driver for PCnet32 and PCnetPCI based ethercards
author:         Thomas Bogendoerfer
srcversion:     261B01C36AC94382ED8D984
alias:          pci:v00001023d00002000sv*sd*bc02sc00i*
alias:          pci:v00001022d00002000sv*sd*bc*sc*i*
alias:          pci:v00001022d00002001sv*sd*bc*sc*i*
depends:        mii
supported:      yes
vermagic:       SMP mod_unload modversions 586
parm:           debug:pcnet32 debug level (int)
parm:           max_interrupt_work:pcnet32 maximum events handled per interrupt (int)
parm:           rx_copybreak:pcnet32 copy breakpoint for copy-only-tiny-frames (int)
parm:           tx_start_pt:pcnet32 transmit start point (0-3) (int)
parm:           pcnet32vlb:pcnet32 Vesa local bus (VLB) support (0/1) (int)
parm:           options:pcnet32 initial option setting(s) (0-15) (array of int)
parm:           full_duplex:pcnet32 full duplex setting(s) (1) (array of int)
parm:           homepna:pcnet32 mode for 79C978 cards (1 for HomePNA, 0 for Ethernet, default Ethernet (array of int)
The modinfo command will give you various useful pieces of information for each module. If a version number is included, check for available updated versions, and download and install them.
When working with some hardware, you should also check what kind of module is used. If the module is open source, in general it's fine as open source modules are thoroughly checked by the Linux community. If the module is proprietary, there may be incompatibilities between the kernel and the particular module. If this is the case, your kernel is flagged as "tainted." A tainted kernel is a kernel that has some modules loaded that are not controlled by the Linux kernel community. To find out if this is the case on your system, you can check the contents of the /proc/sys/kernel/tainted file. If this file has a 0 as its contents, no proprietary modules are loaded. If it has a 1, proprietary modules are loaded and you may be able to fix the situation if you replace the proprietary module with an open source module.
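A minimal sketch of that check (the file path is a parameter so the function can be tried against an ordinary test file; in real use you would pass /proc/sys/kernel/tainted):

```shell
# Sketch: report whether the kernel is tainted, based on the flag file
# described above (0 means only community-supported modules are loaded).
tainted_status() {
    t=$(cat "$1")
    if [ "$t" = "0" ]; then
        echo "clean"
    else
        echo "tainted ($t)"
    fi
}
```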
The information in this article should help you fix driver-related issues.

Wednesday, November 4, 2009

Sendmail Command Line Tips and Tricks

# mailq
Prints the mail queue's contents, same as /usr/lib/sendmail -bp
# newaliases
Rebuilds the aliases database file, same as /usr/lib/sendmail -bi
# hoststat
Prints persistent host status info, same as /usr/lib/sendmail -bh
# purgestat
Purges (zeroes) persistent host status info, same as /usr/lib/sendmail -bH
# smtpd
Runs in daemon mode, same as /usr/lib/sendmail -bd -q30
# mailq -OmaxQueueRunSize=1
Quickly prints the total number of messages in the mail queue
# /usr/lib/sendmail -q -Otimeout.queuereturn=99d
Purges the mail queue without timing out any messages. Useful if the mail server has been down longer than the queuereturn value set in the configuration file.
# /usr/lib/sendmail -bv foolist | grep -v deliverable
Prints only undeliverable addresses in the mailing list foolist. Great for use in a shell script to remove bad addresses from a mailing list.

Command Line Switches  

-B 7bit
Causes sendmail to clear the high-bit of every incoming byte.
-B 8bitmime
Causes sendmail to preserve the high-bit of every incoming byte.
-ba
Uses ARPAnet/Grey-Book protocols to transfer mail.
-bD
Runs as a daemon, like -bd, but does not fork and does not detach from the controlling terminal.
-bd
Runs as a daemon, forks and detaches.
-bH
Purges (zeroes) persistent host status info.
-bh
Prints persistent host status info.
-bi
Initializes the aliases database.
-bm
Causes sendmail to read and send a message (this is the default).
-bp
Prints the contents of the mail queue.
-bs
Runs sendmail in SMTP mode on standard I/O.
-bt
Runs sendmail in rule testing mode.
-bv
Verifies addresses.
-C /tmp/
Uses the specified file as its configuration file.
-c
Sets the HoldExpensive option to true.

-d
Sets debug mode.
  • -d0 – Shows general config
  • -d0.1 – Prints version
  • -d0.4 – Prints local hostname and any aliases for it.
  • -d0.15 – Prints the list of delivery agents declared
  • -d0.20 – Prints address of each network interface
  • -d8 – Traces most DNS lookups
  • -d8.1 – Prints failure of low level MX searches.
  • -d8.2 – Prints calls to getcanonname
  • -d8.3 - Traces dropped local hostnames
  • -d8.5 – Shows hostnames tried in getcanonname
  • -d8.8 – Shows when MX lookups return the wrong type.
  • -d11 – Traces delivery agent calls
  • -d11.1 – Traces arguments passed to the delivery agent
  • -d11.2 - Prints the user ID that the delivery agent is invoked as
  • -d21 – Traces rewriting of addresses
  • -d21.1 – Traces general ruleset rewriting
  • -d21.2 – Traces use of $& macro
  • -d21.3 – Shows $> subroutines called
  • -d21.4 – Displays result of rewrite
  • -d21.15 – Shows $digit replacement
  • -d21.35 – Shows token-by-token LHS matching
  • -d27 – Traces aliasing
  • -d27.1 – Traces general aliasing
  • -d27.2 – Traces :include: files, alias self-references, and errors on home
  • -d27.3 – Traces the ~/.forward path and the alias wait
  • -d27.4 – Prints "not safe" when a file is unsafe to trust
  • -d27.9 – Shows uid/gid changes when reading :include: files
  • -d35 – Traces macros
  • -d35.9 – Shows macro values as they are defined
  • -d35.14 – Shows macro names being converted to integer IDs
  • -d35.24 – Shows macro expansion
  • -d37 – Traces options and class macros
  • -d37.1 – Traces the setting of options
  • -d37.8 – Traces the adding of words to a class
  • -d41 – Traces the queue
  • -d41.1 – Traces queue ordering
  • -d41.2 – Shows failure to open qf files
  • -d41.49 – Shows skipped queue files
  • -d41.50 – Show every file in queue
-F name
Sets the sender's full name.
-f addr
Sets the sender's address.
-h N
Sets the minimum hop count.
-i
Sets the IgnoreDots option to true.
-M
Sets a macro.
-N
Sets return DSN notify information:
  • never – Never return the info
  • success – Return on successful delivery
  • failure – Return on failure
  • delay – Return on delayed delivery
-n
Suppresses aliasing.
-O
Sets an option (long name).
-o
Sets an option (short name).
-p UUCP:test
Sets the protocol in the $r macro to UUCP and the $s macro to test.
-q30m
Sets queue processing to every 30 minutes.
-qRstring
Processes the queue once, delivering only mail whose recipients contain string.
-R hdrs
Bounces only the headers.
-R full
Bounces headers and body.
-s
Sets the SaveFromLine option to true.
-T 5d
Sets the Timeout.queuereturn option to 5 days.
-t
Gathers the list of recipients from the message headers.
-U
Marks this as the initial MUA-to-MTA submission.
-V test123456
Sets the DSN ENVID string to test123456.
-v
Runs sendmail in verbose mode.
-X /var/tmp/trace.mail
Logs both sides of SMTP transactions to the trace.mail file.

Rule Testing Mode (/usr/lib/sendmail -bt)

Prints help.
.Dr UUCP
Defines macro r as UUCP.
=S5
Prints the contents of ruleset 5.
=M
Displays the list of delivery agents.
$name
Prints the value of macro name.
$=w
Prints the contents of the class macro w.
/mx host
Returns the MX records for host in the order they will be used.
/parse foo
Parses the address foo, returns the value of crackaddr(), and the final parsed address including the delivery agent.
/try local foo
Rewrites the address foo based on the rules for local delivery.
/tryflags HS
Sets the flags used by /parse and /try to H for header and S for sender; can also use E for envelope and R for recipient.
/canon foo
Transforms the hostname foo into its canonical form.
/map aliases foo
Looks up foo in the aliases database.
3,0 me@foo
Runs the address me@foo through rulesets 3 and 0.

Linux Security Notes 15: IPTables 8: DMZ

IPTables with DMZ
Let's consider the interfaces used to set up the DMZ:
  • eth0: external interface (
  • eth1: internal interface (
  • eth2: the DMZ zone (

Step 1:
Create DNAT rules for all the servers in the DMZ zone (eth2) so their services can be accessed externally.
# iptables -t nat -A PREROUTING -d -p tcp --dport 80 -j DNAT --to-destination
# iptables -t nat -A PREROUTING -d -p tcp --dport 443 -j DNAT --to-destination
Any request that comes to the firewall with the matching destination IP and port 80 will be DNATed to the corresponding server in the DMZ.
Now test accessing the service in the DMZ from the internal as well as the external network. From both networks we should be able to access the server in the DMZ using that IP.

Configure split DNS or 2 DNS systems (inside and outside of the DMZ).
Set up a rule for the trusted network from the outside network (Internet) for the traffic which will allow system access (SSH).
# iptables -A FORWARD -s -j ACCEPT
# iptables -A FORWARD -s -m state --state ESTABLISHED -j ACCEPT
# iptables -P FORWARD DROP
This will deny all access to the DMZ from Internet hosts and only allow the internal network. Because the default policy of the FORWARD chain is set to DROP, we need to create the "state match" rule for the hosts in the DMZ (this denies new connections sourced from the DMZ; only established connections will be permitted).
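Putting the pieces together, a minimal single-DMZ ruleset might look like the following sketch. All addresses are hypothetical placeholders (203.0.113.25 for the firewall's public IP, 192.168.1.0/24 for the internal LAN, 172.16.1.0/24 for the DMZ, and 172.16.1.10 for the web server):

```
# Publish the DMZ web server on the firewall's public address
iptables -t nat -A PREROUTING -d 203.0.113.25 -p tcp --dport 80 \
    -j DNAT --to-destination 172.16.1.10
# Allow the internal LAN into the DMZ
iptables -A FORWARD -s 192.168.1.0/24 -d 172.16.1.0/24 -j ACCEPT
# Allow replies from the DMZ, but no new connections sourced there
iptables -A FORWARD -s 172.16.1.0/24 -m state --state ESTABLISHED -j ACCEPT
# Default deny for everything else crossing the firewall
iptables -P FORWARD DROP
```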

Dual DMZ Configuration
This is a way of segmenting servers into separate DMZs.
Let's consider the interfaces used to set up the dual DMZ:
  • eth0: external interface (
  • eth1: internal interface (
  • eth2: the DMZ1 zone ( (web servers)
  • eth3: the DMZ2 zone ( (DBMS, app servers like JBoss, Tomcat, etc.)
Using this method we are able to control the traffic from one DMZ to another. This is used in scenarios where application servers need to contact DB servers located on a separate segment.

Here we permit only DMZ1 to contact DMZ2; all other traffic will be denied, so the servers in the DMZ2 zone will be more secure.
# iptables -A FORWARD -s -d -j ACCEPT
# iptables -A FORWARD -m state --state ESTABLISHED -s -j ACCEPT
# iptables -P FORWARD DROP
This allows only DMZ1 to contact DMZ2; from DMZ2 only established connections will be permitted, and all other requests will be dropped in the FORWARD chain. (Note that the FORWARD chain belongs to the filter table, not the nat table, so no -t nat option is used here.)
These rules are the basic backbone for setting up routing and NATing in a DMZ. All other rules should be defined according to our network's needs.

Tuesday, November 3, 2009

Linux Security Notes 15: IPTables 7: NAT

IPTables NAT
    Network Address Translation is one of the features that has made Linux-based firewalls so widely used. NAT is commonly used to masquerade IP addresses.

    The NAT table contains 3 chains: PREROUTING, POSTROUTING, and OUTPUT.
    PREROUTING: DNAT is defined in the PREROUTING chain. Using this we make our internal services available externally (to the Internet), i.e., from the Internet to the LAN (it changes the packets before they are routed to the LAN).
    POSTROUTING: This chain is responsible for MASQUERADE (dynamic SNAT) and SNAT. When a packet needs to leave one (internal) subnet through the Linux firewall for another, it traverses the POSTROUTING chain (it changes the packet after the routing decision). E.g., the MASQUERADE target is used in cases where the ISP provides a DHCP address and the internal LAN needs to browse; then we masquerade all requests from the LAN to the DHCP address provided by the ISP.
    OUTPUT: Locally sourced/generated packets are subjected to NAT here. E.g., if the firewall has more than one IP address, using this chain we can rewrite the packets going out from this Linux machine to a single IP.

    3 types of NATing are used:
  • masquerade
  • snat
  • dnat

Masquerade:
        This feature of NAT is used to dynamically masquerade all internal addresses to the external IP.

The following example will masquerade all outgoing traffic to the external-facing IP of the firewall.

# iptables -t nat -A POSTROUTING -j MASQUERADE

Another example masquerades all the traffic from the network

# iptables -t nat -A POSTROUTING -j MASQUERADE -s

    This will masquerade all requests from the subnet to the external IP of the firewall.
Test by enabling logging for nat and check the log file.
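For masquerading to actually forward traffic, IP forwarding must also be enabled in the kernel. A sketch with a hypothetical internal subnet of 192.168.1.0/24 leaving via eth0:

```
# Enable packet forwarding between interfaces
echo 1 > /proc/sys/net/ipv4/ip_forward
# Masquerade the internal subnet behind eth0's address
iptables -t nat -A POSTROUTING -o eth0 -s 192.168.1.0/24 -j MASQUERADE
```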

Masquerading Ports:

#iptables -A POSTROUTING -t nat -p tcp -j MASQUERADE --to-ports 1024-10240

    This will masquerade all source ports into the range 1024 to 10240. So when an external client makes a connection to the internal server (e.g. # telnet 22), the port allocated to the client will be between 1024 and 10240. As a result, the internal systems will only be able to source ports in the range 1024 to 10240.


SNAT:
        This feature of NAT is used to masquerade a particular internal IP address to a given external address. Though SNAT and masquerading perform the same fundamental function, mapping one address space into another one, the details differ slightly. Most noticeably, masquerading chooses the source IP address for the outbound packet from the IP bound to the interface through which the packet will exit, while SNAT permits 1-to-1 and/or 1-to-many mappings. SNAT is used when we have a static public IP address.

This example will masquerade all the outgoing traffic from the subnet 10.0.0.0/8 to the IP

# iptables -t nat -A POSTROUTING -s -j SNAT --to-source


# iptables -t nat -A POSTROUTING -j SNAT -s --to-source

SNAT using multiple addresses:

# iptables -A POSTROUTING -p tcp -s  -j SNAT --to-source
# iptables -A POSTROUTING -p tcp -s -j SNAT --to-source

    The first rule will NAT all the traffic from source to, and the second rule states that all other traffic from the subnet should be NATed to
Test the functionality by enabling the LOG and use # netstat -ant

DNAT:
    This feature of NAT is used to translate packets coming to a particular destination. Destination NAT with netfilter is commonly used to publish, or make available, an internal network service on a publicly accessible IP. The connection tracking mechanism of netfilter will ensure that subsequent packets exchanged in either direction (which can be identified as part of the existing DNAT connection) are also transformed.

In the following example, all packets arriving on the router with a destination of will depart from the router with a destination of

# iptables -t nat -A PREROUTING -d -j DNAT --to-destination

Make the internal mail server available for external access:

# iptables -A PREROUTING -t nat -d -p tcp --dport 25 -j DNAT --to-destination
# iptables -A PREROUTING -t nat -d -p tcp --dport 110 -j DNAT --to-destination

    Here, if any request comes to the IP of with a destination port of 25 or 110, then IPTables will redirect (NAT) it to the internal address of
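With hypothetical placeholder addresses filled in (203.0.113.25 as the firewall's public IP, 192.168.1.25 as the internal mail server), the rules above would read:

```
iptables -t nat -A PREROUTING -d 203.0.113.25 -p tcp --dport 25 \
    -j DNAT --to-destination 192.168.1.25
iptables -t nat -A PREROUTING -d 203.0.113.25 -p tcp --dport 110 \
    -j DNAT --to-destination 192.168.1.25
```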

NETMAP target in NAT:

    It is implemented in the NAT table's PREROUTING chain. It is used to translate addresses one-to-one from one subnet to another subnet.
For Eg:-
Consider we have one subnet and we need to translate every IP in this subnet to its equivalent in

# iptables -A PREROUTING -t nat -s -j NETMAP --to

    This will convert/rewrite all the packets coming from the subnet to
i.e., the request from the IP will be masked as