
Monday, April 23, 2012

SWAP Space in Linux

Found a very useful article on Linux.com about swap space on Linux. Copying it here.

Linux divides its physical RAM (random access memory) into chunks of memory called pages. Swapping is the process whereby a page of memory is copied to the preconfigured space on the hard disk, called swap space, to free up that page of memory. The combined size of the physical memory and the swap space is the amount of virtual memory available.
Swapping is necessary for two important reasons. First, when the system requires more memory than is physically available, the kernel swaps out less used pages and gives memory to the current application (process) that needs the memory immediately. Second, a significant number of the pages used by an application during its startup phase may only be used for initialization and then never used again. The system can swap out those pages and free the memory for other applications or even for the disk cache.
However, swapping does have a downside. Compared to memory, disks are very slow. Memory speeds can be measured in nanoseconds, while disks are measured in milliseconds, so accessing the disk can be tens of thousands of times slower than accessing physical memory. The more swapping that occurs, the slower your system will be. Sometimes excessive swapping, or thrashing, occurs: a page is swapped out, then very soon swapped back in, then swapped out again, and so on. In such situations the system is struggling to find free memory and keep applications running at the same time. In this case only adding more RAM will help.
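A quick way to see whether a system is actively swapping is to watch the si (swap-in) and so (swap-out) columns of vmstat; a minimal sketch:

vmstat 5

Consistently non-zero si and so values under normal load are a good hint that the system is thrashing and more RAM is needed.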
Linux has two forms of swap space: the swap partition and the swap file. The swap partition is an independent section of the hard disk used solely for swapping; no other files can reside there. The swap file is a special file in the filesystem that resides amongst your system and data files.
To see what swap space you have, use the command swapon -s. The output will look something like this:
Filename        Type        Size    Used    Priority
/dev/sda5       partition   859436  0       -1
Each line lists a separate swap space being used by the system. Here, the 'Type' field indicates that this swap space is a partition rather than a file, and from 'Filename' we see that it is on the disk sda5. The 'Size' is listed in kilobytes, and the 'Used' field tells us how many kilobytes of swap space have been used (in this case none). 'Priority' tells Linux which swap space to use first. One great thing about the Linux swapping subsystem is that if you mount two (or more) swap spaces (preferably on two different devices) with the same priority, Linux will interleave its swapping activity between them, which can greatly increase swapping performance.
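For example, to take advantage of this interleaving you could give two swap partitions equal priority via the pri= mount option in /etc/fstab (hypothetical devices; the fstab format itself is covered below):

/dev/sda5       none    swap    sw,pri=5        0       0
/dev/sdb5       none    swap    sw,pri=5        0       0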
To add an extra swap partition to your system, you first need to prepare it. Step one is to ensure that the partition is marked as a swap partition and step two is to make the swap filesystem. To check that the partition is marked for swap, run as root:
fdisk -l /dev/hdb
Replace /dev/hdb with the device of the hard disk on your system with the swap partition on it. You should see output that looks like this:
Device Boot     Start   End     Blocks  Id      System
/dev/hdb1       2328    2434    859446  82      Linux swap / Solaris
If the partition isn't marked as swap you will need to alter it by running fdisk and using the 't' menu option. Be careful when working with partitions -- you don't want to delete important partitions by mistake or change the id of your system partition to swap by mistake. All data on a swap partition will be lost, so double-check every change you make. Also note that Solaris uses the same ID as Linux swap space for its partitions, so be careful not to kill your Solaris partitions by mistake.
Once a partition is marked as swap, you need to prepare it using the mkswap (make swap) command as root:
mkswap /dev/hdb1
If you see no errors, your swap space is ready to use. To activate it immediately, type:
swapon /dev/hdb1
You can verify that it is being used by running swapon -s. To mount the swap space automatically at boot time, you must add an entry to the /etc/fstab file, which contains a list of filesystems and swap spaces that need to be mounted at boot up. The format of each line is:
<device>  <mount point>  <filesystem type>  <options>  <dump>  <fsck order>
Since swap space is a special type of filesystem, many of these parameters aren't applicable. For swap space, add:
/dev/hdb1       none    swap    sw      0       0
where /dev/hdb1 is the swap partition. It doesn't have a specific mount point, hence none. It is of type swap with options of sw, and the last two parameters aren't used so they are entered as 0.
To check that your swap space is being automatically mounted without having to reboot, you can run the swapoff -a command (which turns off all swap spaces) and then swapon -a (which mounts all swap spaces listed in the /etc/fstab file) and then check it with swapon -s.
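In other words, the check is simply:

swapoff -a
swapon -a
swapon -s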

Swap file

As well as the swap partition, Linux also supports a swap file that you can create, prepare, and mount in a fashion similar to that of a swap partition. The advantage of swap files is that you don't need to find an empty partition or repartition a disk to add additional swap space.
To create a swap file, use the dd command to create an empty file. To create a 1GB file, type:
dd if=/dev/zero of=/swapfile bs=1024 count=1048576
/swapfile is the name of the swap file, and the count of 1048576 is the size in kilobytes (i.e. 1GB).
Prepare the swap file using mkswap just as you would a partition, but this time use the name of the swap file:
mkswap /swapfile
And similarly, mount it using the swapon command: swapon /swapfile.
The /etc/fstab entry for a swap file would look like this:
/swapfile       none    swap    sw      0       0
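Putting the swap-file steps together, a minimal end-to-end sketch (the chmod line is an addition not in the original article, commonly recommended so that the swap file is readable only by root):

dd if=/dev/zero of=/swapfile bs=1024 count=1048576
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
swapon -s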

How big should my swap space be?

It is possible to run a Linux system without a swap space, and the system will run well if you have a large amount of memory -- but if you run out of physical memory then the system will crash, as it has nothing else it can do. It is therefore advisable to have a swap space, especially since disk space is relatively cheap.
The key question is how much? Older versions of Unix-type operating systems (such as Sun OS and Ultrix) demanded a swap space of two to three times that of physical memory. Modern implementations (such as Linux) don't require that much, but they can use it if you configure it. A rule of thumb is as follows:
1. For a desktop system, use a swap space of double the system memory, as it will allow you to run a large number of applications (many of which will likely be idle and easily swapped), making more RAM available for the active applications.
2. For a server, have a smaller amount of swap available (say half of physical memory) so that you have some flexibility for swapping when needed, but monitor the amount of swap space used and upgrade your RAM if necessary.
3. For older desktop machines (with say only 128MB), use as much swap space as you can spare, even up to 1GB.
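When applying these rules of thumb, free shows how much RAM and swap the system currently has and how much is in use:

free -m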
The Linux 2.6 kernel added a new kernel parameter called swappiness to let administrators tweak the way Linux swaps. It is a number from 0 to 100. In essence, higher values lead to more pages being swapped, and lower values lead to more applications being kept in memory, even if they are idle. Kernel maintainer Andrew Morton has said that he runs his desktop machines with a swappiness of 100, stating that "My point is that decreasing the tendency of the kernel to swap stuff out is wrong. You really don't want hundreds of megabytes of BloatyApp's untouched memory floating about in the machine. Get it out on the disk, use the memory for something useful."
One downside to Morton's idea is that if memory is swapped out too quickly then application response time drops, because when the application's window is clicked the system has to swap the application back into memory, which will make it feel slow.
The default value for swappiness is 60. You can alter it temporarily (until you next reboot) by typing as root:
echo 50 > /proc/sys/vm/swappiness
If you want to alter it permanently then you need to change the vm.swappiness parameter in the /etc/sysctl.conf file.
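For example, the permanent equivalent of the command above is this line in /etc/sysctl.conf:

vm.swappiness = 50

Running sysctl -p as root then applies the change without a reboot.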

Conclusion

Managing swap space is an essential aspect of system administration. With good planning and proper use, swapping can provide many benefits. Don't be afraid to experiment, and always monitor your system to ensure you are getting the results you need.

Thanks to : Linux.com 

Wednesday, December 14, 2011

New features of yum in RHEL-6.1 now that it's released


A few things you might not know about RHEL-6.1+ yum

  • Search is more user friendly

    As we maintain yum we are always looking for the "minor" changes that can make a big difference to the user, and this is probably one of the biggest minor changes. In late RHEL-5 and RHEL-6.0, "yum search" was great for finding obscure things that you knew something about, but with 6.1 we've hopefully made it useful for finding the "everyday" packages you can't remember the exact name of. We did this by excluding a lot of the "extra" hits when you get a large search result. For instance, "yum search kvm manager" is pretty useless in RHEL-6.0, but in RHEL-6.1 you should find what you want very quickly.
    Example commands:

    yum search kvm manager
    yum search python url
    
  • The updateinfo command

    The "yum-security" or "yum-plugin-security" package has been around since early RHEL-5, but the RHEL-6.1 update has introduced the "updateinfo" command to make things a little easier to use, and you can now easily view installed security errata (to more easily make sure you are secure). We've also added a few new pieces of data to the RHEL updateinfo data. Probably the most significant is that as well as errata being marked "security" or not, they are now tagged with their "severity", so you can automatically apply only "critical" security updates, for example.
Example commands:

yum updateinfo list security all
yum update-minimal --sec-severity=critical


  • The versionlock command

    As with the previous point, we've had "yum-plugin-versionlock" for a long time, but now we've made it easier to use and put all its functions under a single "versionlock" sub-command. You can now also "exclude" specific versions you don't want, instead of locking to known good specific ones you had tested.
Example commands:

# Lock to the version of yum currently installed.
yum versionlock add yum
# Opposite, disallow versions of yum currently available:
yum versionlock exclude yum
yum versionlock list
yum versionlock delete yum\*
yum versionlock clear
# This will show how many "excluded" packages are in each repo.
yum repolist -x .


  • Manage your own .repo variables

    This is actually available in RHEL-6.0, but given that almost nobody knows about it I thought I'd share it here. You can put files in "/etc/yum/vars" and then use the names of those files as variables in any yum configuration, just like $basearch or $releasever. There is also a special $uuid variable, so you can track individual machines if you want to.
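    Example commands (a sketch; the variable name "myvariant" and the URL are hypothetical):

    # Create the variable; its value is the file's content:
    echo server > /etc/yum/vars/myvariant
    # ...then reference it in a .repo file like any built-in variable:
    # baseurl=http://example.com/repo/$myvariant/$basearch/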

  • yum has its own DB

    Again, this is something that was there in RHEL-6.0 but has improved (and is likely to improve more over time). The most noticeable addition is that we now store the "installed_by" and "changed_by" attributes. This could be worked out from "yum history" before, but now it's easily available directly from the installed package.
  • Example commands:
    yumdb 
    yumdb info yum 
    yumdb set installonly keep kernel-2.6.32-71.7.1.el6 
    yumdb sync
  • Additional data in "yum history"

    Again, this is something that was there in RHEL-6.0 but has improved (and is likely to improve more over time). The most noticeable additions are that we now store the command line, and we store a "transaction file" that you can use on other machines.
    Example commands:

    yum history
    yum history pkgs yum
    yum history summary
    
    yum history undo last
    
    yum history addon-info 1    config-main
    yum history addon-info last saved_tx
    
    "yum install" is now fully kickstart compatible As of RHEL-6.0 there was one thing you could do in a kickstart package list that you couldn't do in "yum install" and that was to "remove" packages with "-package". As of the RHEL-6.1 yum you can do that, and we also added that functionality to upgrade/downgrade/remove. Apart from anything else, this should make it very easy to turn the kickstart package list into "yum shell" files (which can even be run in kickstart's %post).
    Example commands:

     yum install 'config(postfix) >= 2.7.0'
     yum install MTA
     yum install '/usr/kerberos/sbin/*'
     yum -- install @books -javanotes
    
  • Easier to change yum configuration

    We tended to get a lot of feature requests for a plugin to add a command line option so the user could change a single yum.conf variable, and we had to evaluate those requests for general distribution based on how much we thought all users would want/need them. With the RHEL-6.1 yum we created the --setopt option so that any option can be changed easily, without having to create a specific bit of code. There were also some updates to the yum-config-manager command.
    Example commands:
    yum --setopt=alwaysprompt=false upgrade yum
    yum-config-manager
    yum-config-manager --enable myrepo
    yum-config-manager --add-repo https://example.com/myrepo.repo
  • Working towards managing 10 machines easily

    yum is the best way to manage a single machine, but it isn't quite as good at managing 10 identical machines. While the RHEL-6.1 yum still isn't great at this, we've made a few improvements that should help significantly. The biggest is probably the "load-ts" command, and the infrastructure around it, which allows you to easily create a transaction on one machine, test it, and then "deploy" it to a number of other machines. This is done with checking on the yum side that the machines started from the same place (via rpmdb versions), so that you know you are doing the same operation.
    Also worth noting is that we have added a plugin hook to the "package verify" operation, allowing things like "puppet" to hook into the verification process. A prototype of what that should allow those kinds of tools to do was written by Seth Vidal.
    Example commands:

    # Find the current rpmdb version for this machine (available in RHEL-6.0)
    yum version nogroups
    # Completely re-image a machine, or dump its "package image"
    yum-debug-dump
    yum-debug-restore \
        --install-latest \
        --ignore-arch \
        --filter-types=install,remove,update,downgrade
    
    # This is the easiest way to get a transaction file without modifying the rpmdb
    echo | yum update blah
    ls ${TMPDIR:-/tmp}/yum_save_tx-* | sort | tail -1
    
    # You can now load a transaction and/or see the previous transaction from the history
    yum load-ts /tmp/yum_save_tx-2011-01-17-01-00ToIFXK.yumtx
    yum -q history addon-info last saved_tx > my-yum-saved-tx.yumtx

    
    
    

    Tuesday, November 22, 2011

    Linux filtering and transforming text - Command Line Reference


    View defined directives in a config file:


    grep -v '^#' /etc/vsftpd/vsftpd.conf | grep .


    View a line matching "Initializing CPU" and the 5 lines immediately after the match using 'grep', or display a specific range of lines with 'sed':


    grep -A 5 "Initializing CPU#1" dmesg
    sed -n '101,110p' /var/log/cron - displays lines 101 to 110 of the log file


    Exclude the empty lines:

    grep -v '^#' /etc/vsftpd/vsftpd.conf | grep .
    grep -v '^#' /etc/ssh/sshd_config | sed -e '/^$/d'
    grep -v '^#' /etc/ssh/sshd_config | awk '/./{print}'

    More examples of GREP:

    grep smug *.txt                {search *.txt files for 'smug'}
    grep BOB tmpfile               {search 'tmpfile' for 'BOB' anywhere in a line}
    grep -i -w blkptr *            {search files in CWD for word blkptr, any case}
    grep 'run[- ]time' *.txt       {find 'run time' or 'run-time' in all txt files}
    who | grep root                {pipe who to grep, look for root}
    grep smug files                {search files for lines with 'smug'}
    grep '^smug' files             {'smug' at the start of a line}
    grep 'smug$' files             {'smug' at the end of a line}
    grep '^smug$' files            {lines containing only 'smug'}
    grep '\^s' files               {lines starting with '^s', "\" escapes the ^}
    grep '[Ss]mug' files           {search for 'Smug' or 'smug'}
    grep 'B[oO][bB]' files         {search for BOB, Bob, BOb or BoB}
    grep '^$' files                {search for blank lines}
    grep '[0-9][0-9]' file         {search for pairs of numeric digits}
    grep '^From: ' /usr/mail/$USER {list your mail}
    grep '[a-zA-Z]' files          {any line with at least one letter}
    grep '[^a-zA-Z0-9]' files      {anything not a letter or number}
    grep '[0-9]\{3\}-[0-9]\{4\}' files  {999-9999, like phone numbers}
    grep '^.$' files               {lines with exactly one character}
    grep '"smug"' files            {'smug' within double quotes}
    grep '"*smug"*' files          {'smug', with or without quotes}
    grep '^\.' files               {any line that starts with a Period "."}
    grep '^\.[a-z][a-z]' files     {line starts with "." and 2 lowercase letters}


    Grep command symbols used to search files:

    ^ (Caret) = match expression at the start of a line, as in ^A.
    $ (Dollar Sign) = match expression at the end of a line, as in A$.
    \ (Back Slash) = turn off the special meaning of the next character, as in \^.
    [ ] (Brackets) = match any one of the enclosed characters, as in [aeiou].
    Use Hyphen "-" for a range, as in [0-9].
    [^ ] = match any one character except those enclosed in [ ], as in [^0-9].
    . (Period) = match a single character of any value, except end of line.
    * (Asterisk) = match zero or more of the preceding character or expression.
    \{x,y\} = match x to y occurrences of the preceding.
    \{x\} = match exactly x occurrences of the preceding.
    \{x,\} = match x or more occurrences of the preceding.
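    Putting several of these symbols together - a sketch, assuming a hypothetical file phone.txt:

    grep '^[0-9]\{3\}-[0-9]\{4\}$' phone.txt   {lines that are exactly a 999-9999 style phone number}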

    Monday, October 24, 2011

    How To Disable SSH Host Key Checking


    Remote login using the SSH protocol is a frequent activity in today's internet world. With the SSH protocol, the onus is on the SSH client to verify the identity of the host to which it is connecting. The host identity is established by its SSH host key. Typically, the host key is auto-created during initial SSH installation setup.

    By default, the SSH client verifies the host key against a local file containing known, trustworthy machines. This provides protection against possible Man-In-The-Middle attacks. However, there are situations in which you want to bypass this verification step. This article explains how to disable host key checking using OpenSSH, a popular Free and Open-Source implementation of SSH.

    When you login to a remote host for the first time, the remote host's host key is most likely unknown to the SSH client. The default behavior is to ask the user to confirm the fingerprint of the host key.
    $ ssh peter@192.168.0.100
    The authenticity of host '192.168.0.100 (192.168.0.100)' can't be established.
    RSA key fingerprint is 3f:1b:f4:bd:c5:aa:c1:1f:bf:4e:2e:cf:53:fa:d8:59.
    Are you sure you want to continue connecting (yes/no)? 

    If your answer is yes, the SSH client continues login, and stores the host key locally in the file ~/.ssh/known_hosts. You only need to validate the host key the first time around: in subsequent logins, you will not be prompted to confirm it again.

    Yet, from time to time, when you try to remote login to the same host from the same origin, you may be refused with the following warning message:
    $ ssh peter@192.168.0.100
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    @    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
    Someone could be eavesdropping on you right now (man-in-the-middle attack)!
    It is also possible that the RSA host key has just been changed.
    The fingerprint for the RSA key sent by the remote host is
    3f:1b:f4:bd:c5:aa:c1:1f:bf:4e:2e:cf:53:fa:d8:59.
    Please contact your system administrator.
    Add correct host key in /home/peter/.ssh/known_hosts to get rid of this message.
    Offending key in /home/peter/.ssh/known_hosts:3
    RSA host key for 192.168.0.100 has changed and you have requested strict checking.
    Host key verification failed.$

    There are multiple possible reasons why the remote host key changed. A Man-in-the-Middle attack is only one possible reason. Other possible reasons include:
    • OpenSSH was re-installed on the remote host but, for whatever reason, the original host key was not restored.
    • The remote host was replaced legitimately by another machine.

    If you are sure that this is harmless, you can use either of the two methods below to trick OpenSSH into letting you login. But be warned that you have become vulnerable to man-in-the-middle attacks.

    The first method is to remove the remote host from the ~/.ssh/known_hosts file. Note that the warning message already tells you the line number in the known_hosts file that corresponds to the target remote host. The offending line in the above example is line 3 ("Offending key in /home/peter/.ssh/known_hosts:3").

    You can use the following one liner to remove that one line (line 3) from the file.
    $ sed -i 3d ~/.ssh/known_hosts

    Note that with the above method, you will be prompted to confirm the host key fingerprint when you run ssh to login.
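    Alternatively, newer OpenSSH releases can remove the entry for you, without your having to track down the line number; a sketch using the host from the example:

    $ ssh-keygen -R 192.168.0.100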

    The second method uses two openSSH parameters:
    • StrictHostKeyChecking, and
    • UserKnownHostsFile.

    This method tricks SSH by configuring it to use an empty known_hosts file, and NOT to ask you to confirm the remote host identity key.
    $ ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no peter@192.168.0.100
    Warning: Permanently added '192.168.0.100' (RSA) to the list of known hosts.
    peter@192.168.0.100's password:

    The UserKnownHostsFile parameter specifies the database file to use for storing the user host keys (default is ~/.ssh/known_hosts).

    The /dev/null file is a special system device file that discards anything and everything written to it, and when used as the input file, returns End Of File immediately.

    By configuring the null device file as the host key database, SSH is fooled into thinking that the SSH client has never connected to any SSH server before, and so will never run into a mismatched host key.

    The parameter StrictHostKeyChecking specifies whether SSH will automatically add new host keys to the host key database file. By setting it to no, the host key is automatically added, without user confirmation, for all first-time connections. Because of the null key database file, every connection is viewed as a first-time connection to any SSH server host. Therefore, the host key is automatically added to the host key database with no user confirmation. Writing the key to the /dev/null file discards the key and reports success.

    Please refer to this excellent article about host keys and key checking.

    By specifying the above 2 SSH options on the command line, you can bypass host key checking for that particular SSH login. If you want to bypass host key checking on a permanent basis, you need to specify those same options in the SSH configuration file.

    You can edit the global SSH configuration file (/etc/ssh/ssh_config) if you want to make the changes permanent for all users.

    If you want to target a particular user, modify the user-specific SSH configuration file (~/.ssh/config). The instructions below apply to both files.

    Suppose you want to bypass key checking for a particular subnet (192.168.0.0/24).

    Add the following lines to the beginning of the SSH configuration file.
    Host 192.168.0.*
       StrictHostKeyChecking no
       UserKnownHostsFile=/dev/null

    Note that the configuration file should have a line like Host * followed by one or more parameter-value pairs. Host * means that it will match any host. Essentially, the parameters following Host * are the general defaults. Because the first matched value for each SSH parameter is used, you want to add the host-specific or subnet-specific parameters to the beginning of the file.

    As a final word of caution, unless you know what you are doing, it is probably best to bypass key checking on a case by case basis, rather than making blanket permanent changes to the SSH configuration files.


    Refer & Thanks to: http://linuxcommando.blogspot.com/2008/10/how-to-disable-ssh-host-key-checking.html

    Thursday, June 9, 2011

    IPV6 - Chapter 3 ICMPv6


    Abstract

    This white paper discusses ICMPv6 and describes the types of ICMPv6 messages.

    Introducing ICMPv6

    Internet Control Message Protocol (ICMP) is a communication method for reporting packet-handling errors. ICMP for IPv6 (ICMPv6) is the latest version of ICMP. All IPv6 nodes must conduct ICMPv6 error reporting.
    ICMPv6 can be used to analyze intranet communication routes and multicast addresses. It incorporates operations from the Internet Group Management Protocol (IGMP) for reporting errors on multicast transmissions, and ICMPv6 packets are used in the IGMP extension Multicast Listener Discovery (MLD) protocol to locate linked multicast nodes. ICMPv6 is also used for operations such as packet Internet groper (ping), traceroute, and Neighbor Discovery.

    ICMPv6 message types

    Like IPv6, ICMPv6 is a network layer protocol. However, IPv6 sees ICMPv6 as an upper layer protocol because it sends its messages inside IP datagrams. The two types of ICMPv6 message are
    • error messages
    • information messages

    ICMPv6 error messages

    The ICMPv6 error messages notify the source node of a transmission error. This enables the packet's originator to implement a solution to the reported error and attempt successful transmission. If the type of error message received is unknown, the message is transferred to an upper layer protocol for processing. The type of message is identified with type values ranging from 1 to 127.
    Types of packet transmission error messages include
    • Destination Unreachable
    • Parameter Problem
    • Packet Too Big
    • Time Exceeded

    Destination Unreachable

    A router will communicate a Destination Unreachable message to the source address when a message cannot be delivered due to a cause other than congested network paths. The Destination Unreachable message signals the reason for delivery failure using one of five codes.
     
    Table 1: Destination Unreachable message codes, labels, and causes
    Code 0 - No route to destination: generated by a router that has no default route to the destination address.
    Code 1 - Communication with destination administratively prohibited: generated by a packet-filtering firewall when a packet is denied access to a host behind the firewall.
    Code 2 - Not a neighbor: sent when the forwarding node does not share a network link with the next node on the route; it applies to packets using a route defined in the IPv6 routing header extension.
    Code 3 - Address unreachable: an error resolving the IPv6 destination address to a link-layer address can trigger this message.
    Code 4 - Port unreachable: generated by the destination address when there is no transport layer protocol listening for traffic.

    Parameter Problem

    When an error with either the IPV6 header or extension headers prevents successful packet processing, the router sends a Parameter Problem message to indicate the nature of the problem to the source address.

    Packet Too Big

    The router forwards a Packet Too Big message to the source address when the transmitted packet is too large for the maximum transmission unit (MTU) link to the recipient address.

    Time Exceeded

    The router communicates a Time Exceeded message to the source address when the value of the Hop Limit field reaches zero.

    ICMPv6 information messages

    Messages with type values of 128 and above are information messages. ICMPv6 information messages, as defined in RFC 1885, can include
    • an Echo Request
    • an Echo Reply
    The Echo Request and Echo Reply messages are part of ping. The purpose of ping is to determine whether specific hosts are connected to the same network. If the type of information message received is unknown, the message should be deleted.
    IGMP and Neighbor Discovery protocol messages are also classed as information messages.
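    Echo Request and Echo Reply are easy to observe with the standard tools; a sketch (tool names vary by distribution, and newer systems use ping -6 in place of ping6):

    ping6 ipv6.google.com
    traceroute6 ipv6.google.com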

    ICMPv6 message fields

    ICMPv6 packets are located within the last extension header in the IPv6 packet, and they are identified in the previous Next Header field by a value of 58. All ICMPv6 packets contain three fields and a message body. The ICMPv6 messages fields have certain functions, as shown in the following table.
     
    Table 2: ICMPv6 message fields
    Type - An 8-bit field that specifies the type of message and determines the contents of the message body. A value in the Type field from 0 to 127 indicates an error message, and a value from 128 to 255 indicates an information message.
    Code - An 8-bit field that provides a numeric code for identifying the type of message.
    Checksum - A 16-bit field that identifies instances of data violation in the ICMPv6 message and header. The value of the Checksum field is determined using the contents of the ICMPv6 Message fields and the IPv6 pseudoheader.

    Checksum field

    Before sending an ICMP message, a system calculates a checksum to place in the Checksum field. The checksum is calculated as follows:
    • if the ICMP message contains an odd number of bytes, the system adds an imaginary trailing byte equal to zero
    • the extra byte is used in the checksum calculation but is not sent with the message
    • a pseudoheader, containing source and destination IP addresses, the payload length, and the Next Header byte for ICMP is added to the message
    • the pseudoheader is used for checksum generation only and not transmitted
    • the receiving system verifies the checksum by using the same calculation process as the sending system
    • if the checksum is correct, ICMP accepts the message
    • if the checksum is incorrect, ICMP discards the message

    Threats to message integrity

    ICMPv6 messages can be subject to malicious attacks. For example, the source address of the message may be concealed by an alternative address, the message body may be modified, or the message may be intercepted and forwarded to an address other than the intended destination.
    The ICMPv6 authentication mechanism can be applied to ICMPv6 messages to ensure that packets are sent to the intended recipient. A checksum calculation can also be generated, using the value of the data contents to safeguard the integrity of the source address, destination address, and the message body.

    Neighbor discovery

    The IPv6 Neighbor Discovery protocol incorporates the IPv4 functions of Address Resolution Protocol (ARP), ICMP Router Discovery messages, and ICMP Redirect messages to communicate information across the network. IPV6 nodes use Neighbor Discovery protocol to
    • trace the data-link layer address of local-link multicast neighbors
    • determine the accessibility of neighbors
    • monitor neighbor routers
    The Neighbor Discovery protocol utilizes five informational message types to assist in neighbor discovery
    1. Type 133 – Router Solicitation
    2. Type 134 – Router Advertisement
    3. Type 135 – Neighbor Solicitation
    4. Type 136 – Neighbor Advertisement
    5. Type 137 – Redirect

    Type 133 – Router Solicitation

    The Router Solicitation message is multicast to all routers by a host to prompt routers to generate router advertisement messages.

    Type 134 – Router Advertisement

    Routers transmit Router Advertisement messages in response to a host's Router Solicitation message. Periodically, routers use Router Advertisement messages to identify themselves to hosts on a network.

    Type 135 – Neighbor Solicitation

    A key responsibility of ICMPv6 is the mapping of IP addresses to data-link layer addresses. It uses a simple strategy to do this: a node multicasts a Neighbor Solicitation message to all hosts on the network, requesting the Ethernet address corresponding to a particular IP address.

    Type 136 – Neighbor Advertisement

    A Neighbor Advertisement message takes much the same form as a Neighbor Solicitation message. The advertisement includes the target's IP address, and through an option, it also includes the target's data-link layer address.

    Type 137 – Redirect

    ICMPv6 uses the Neighbor Redirect message to inform the originator node of a more efficient network route for delivery of the forwarded message. Routers forward the ICMPv6 message and transmit a Redirect message to the local-link address of the originator node if
    • a more effective first hop route is identified on the same local link as the originator node
    • the originator uses a global IPv6 source address to transmit a packet to a local-link neighbor
    • the packet was not addressed to the router that received it
    • the target address of the packet is not a multicast address

    Summary

    Internet Control Message Protocol for IPv6 (ICMPv6) is a communication method for reporting packet-handling errors on an IPv6 network. The two message types are information messages and error messages. ICMPv6 is also used for operations such as packet Internet groper (ping), traceroute, and Neighbor Discovery.
    --
    //kiranツith 

    IPV6 - Chapter 1 - Introduction

    IPv6
    A total of 3.403×10^38 addresses are available in IPV6.

    IPV6 is a new version of the internet protocol, designed as a successor to IPV4.
    The changes from IPV4 to IPV6 are predominantly in the following areas:
    1. Addressing
    2. Header Format
    3. Flow
    4. Extensions and Options
    5. Authentication and Privacy

    1. The most significant change in the upgrade from IPV4 to IPV6 is the increase in addressing space from 32 bits to 128 bits. This new addressing capability can cope with the accelerating usage of the internet. IPV6 changes the addressing types by introducing anycast addressing and discarding the broadcast address employed by IPV4.

    2. IPV4 headers contain at least 12 fields and can vary in length from 20 to 60 bytes.
    IPV6 has simplified the header formatting structure by using a fixed length of 40 bytes. The reduction in the number of fields that need to be processed allows for more effective network routing. IPV6 changes the packet fragmentation principle by enabling fragmentation to be conducted by the source node only. This also reduces the number of fields required in the packet header. The format of the packet header is simplified in IPV6 by the removal of the check-sum field. IPV6 focuses on routing packets, and the check-sums are implemented in higher level protocols, such as UDP and TCP.

    3. IPV4 processes each packet individually at intermediate routers. These routers do not record packet details for future handling of similar packets. IPV6 introduces the concept of packets in a flow. A flow is a series of packets in a stream of data that require special handling; an example of a flow is a stream of real-time video data. IPV6 routers can monitor flows and log consistent information for the effective handling of flow packets.

    4. IPV4 adds options to the end of the IP header, whereas IPV6 adds options to separate extension headers. This means that, in IPV6, the option header is processed only when a packet contains options. The use of extension headers to contain options obviates the need for all routers to examine certain options. For example, in IPV6, only the source node can fragment a packet, therefore the only nodes that need to examine the fragmentation extension header are the source and destination nodes.

    5. The two security extensions employed by IPV6 are
    • authentication header
    Packet authentication is implemented through message-digest functions. The sender calculates a message digest or hash on the packet being sent. The results of this calculation are contained in the authentication header. The packet recipient performs a hash on the received packet and compares the result against the value in the authentication header. Matching values confirm that the packet traveled from source to destination without violation. Differing values indicate that the packet was modified in transit.

    • encapsulating security payload (ESP) header
    The ESP header can encrypt the payload field in an IPV6 packet or the entire packet, ensuring data integrity as it is forwarded across the network. Encrypting the entire packet ensures that packet data, such as the source and destination addresses, are not intercepted during transmission. Encrypted packets are transported within another IPV6 packet that functions as a security gateway.

    Header Structure of IPV4 & IPV6

    IPV4
    The IPV4 packet header is aligned on a 32-bit, or 4-byte, boundary.
    It contains
    • Ten fields
    Contains: Version, Header Length, Type of Service, Total length, Identifier, Flags, Fragment Offset, Time to Live, Protocol, Header Check-sum
    • Two addresses
    Source Address and Destination Address
    • Options
    Options + Padding

    IPV6
    The IPV6 packet header expands on the IPV4 header by providing a 64-bit, or 8-byte, boundary. All IPV6 headers are 40 bytes in total. It contains a simpler header format of
    • Six Fields
    Version, Traffic Class, Flow Label, Payload Length, Next Header & Hop Limit
    • Two Addresses
    Source Address and Destination Address

    Extension Headers
    IPV4 implements a complex method for the inclusion of options in the routing of packets. The IPV4 packet structure can vary in size from 20-60 bytes, and IPV4 options are included as extra data. As a result, options may be forwarded without being processed or be processed at each router. Such inefficient routing can lead developers to avoid the use of options.
    IPV6 implements a new variety of extension headers to improve the routing of packets with options. Instead of incorporating options into the IPV6 header, the options are placed in separate extension headers appended to the IPV6 header and identified by the Next Header field.
    Extension headers - with the exception of hop-by-hop options header - are not processed until they reach the destination address. Each extension header is a multiple of 8 octets in length, preserving the 64-bit alignment for subsequent headers.


    --
    //kiranツith

    Saturday, January 15, 2011

    Install Packages Via yum Command Using DVD / CD as Repo - CentOS (RHEL Based)



    CentOS Linux comes with CentOS-Media.repo, which points to the default mount locations for a CDROM / DVD on CentOS-5.*. You can use this repo and yum to install items directly off the DVD ISO that the CentOS project releases.
    Open /etc/yum.repos.d/CentOS-Media.repo file, enter:
    # vi /etc/yum.repos.d/CentOS-Media.repo
    Make sure enabled is set to 1:
    enabled=1
    Save and close the file. To use the repo, insert your DVD and, along with the other repos, enter:
    # yum --enablerepo=c5-media install package-name
    To use only the DVD media repo, do this:
    # yum --disablerepo=\* --enablerepo=c5-media install package-name
    Or use the groupinstall command:
    # yum --disablerepo=\* --enablerepo=c5-media groupinstall 'Virtualization'
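    The repo id can differ between CentOS releases, so it is worth confirming it before use; a sketch:
    # yum repolist all | grep -i media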

    Monday, December 28, 2009

    What happens when you browse to a web site

    This is a perennial favorite in technical interviews: "so you type 'www.example.com' in your favorite web browser. In as much detail as you can, tell me what happens."
    Let's assume that we do this on a Linux (or other AT&T System V UNIX) system. Here's what happens, in enough detail to make your eyes bleed.
    1. Your web browser invokes the gethostbyname() function to turn the hostname you entered into an IP address. (We'll get back to how this happens, exactly, in a moment.)
    2. Your web browser consults the /etc/services file to determine what well-known port HTTP resides on, and finds 80.
    3. Two more pieces of information are determined by Linux so that your browser can initiate a connection: your local IP address, and an ephemeral port number. Combined with the destination (server) IP address and port number, these four pieces of information represent what is called an Internet PCB, or protocol control block. Furthermore, the IANA defines the port range 49,152 through 65,535 for use in this capacity. Exactly how a port number is chosen from this range depends upon the Linux kernel version. The most common allocation algorithm is to simply remember the last-allocated number, and increment it by one each time a new PCB is requested. When 65,535 is reached, the algorithm loops around to 49,152. (This has certain negative security implications, and is addressed in more detail in Port Randomization by Larsen, Ericsson, et al, 2007.) Also see TCP/IP Illustrated, Volume 2: The Implementation by Wright and Stevens, 1995.
    4. Your web browser sends an HTTP GET request to the remote server. Be careful here, as you must remember that your web browser does not speak TCP, nor does it speak IP. It only speaks HTTP. It doesn't care about the transport protocol that gets its HTTP GET request to the server, nor how the server gets its answer back to it.
    5. The HTTP packet passes down the four-layer model that TCP/IP uses, from the application layer where your browser resides to the transport layer. The application layer is a connectionless layer, with addressing based upon URLs, or uniform resource locators.
    6. The transport layer encapsulates the HTTP request inside TCP (transmission control protocol. Transport layer for transmission control, makes sense, right?). The transport layer is a connection-based or persistent layer, with addressing based upon port numbers: TCP does not care about IP addresses, only that some specific port on the client side is bound to a specific port on the server side. The TCP packet is then passed down to the second layer, the network layer.
    7. The network layer uses IP (Internet protocol), and adds an IP header to the TCP packet. The network layer is a connectionless or best-effort layer, with addressing based upon 32-bit IP addresses; routing, but not switching, occurs at this layer. The packet is then passed down to the first layer, the link layer.
    8. The link layer uses the Ethernet protocol. This is a connectionless layer, with addressing based upon 48-bit Ethernet addresses. Switching occurs at this layer.
    9. The kernel must determine what connection over which to send the packet. This happens by taking the IP address and consulting the routing table (seen by running netstat -rn.) First, the kernel attempts to match the destination by host address. (For example, if you have a specific route to just the one host you're trying to reach in your browser.) If this fails, then network address matching is tried. (For example, if you have a specific route to the network in which the host you're trying to reach resides.) Lastly, the kernel searches for a default route entry. This is the most common case.
    10. Now that the kernel knows the next hop, that is, the node that the packet should be handed off to, the kernel must make a physical connection to it. Routing depends upon each node in the chain having a literal electrical connection to the next node; it doesn't matter how many nodes (or hops) the packet must pass through so long as each and every one can "see" its neighbor. This is handled on the link layer, which if you'll recall uses a different addressing scheme than IP addresses. This is where ARP, or the address resolution protocol, comes into play. Let's say your machine is 1.2.3.4, and the default gateway is 1.2.3.5. The kernel will send an ARP broadcast which says, "Who has 1.2.3.5? Tell 1.2.3.4." The default gateway machine will see the ARP request and reply, saying "Hey 1.2.3.4, 1.2.3.5 is 8:0:20:4:3f:2a." The kernel places the answer in the ARP cache, which can be viewed by running arp -a. Now that this information is known, the kernel adds an Ethernet header to the packet, and places it on the wire.
    11. The default gateway receives the packet. First, it checks to see if the Ethernet address matches its own. If it does not, the packet is silently discarded (unless the interface is in promiscuous mode.) Next, it checks to see if the destination IP address matches any of its configured interfaces. In our scenario here, it does not: remember that the packet is being routed to another destination by way of this gateway. So the gateway now checks to see if it is configured to permit IP forwarding. If it is not, the packet is silently discarded. We'll assume the gateway is configured to forward IP, so now it must determine what to do with the packet. It consults its routing table, and attempts to match the destination in the same way our web browser system did a moment ago: exact host match first, then network, then default gateway. Yes, a default gateway server can itself have a default gateway. It also uses ARP in the same way as we saw a moment ago in order to reach the next hop, and pass the packet on to it. Before doing so, however, it decrements the TTL (time-to-live) field in the packet, and if it becomes 1 or 0, discards the packet and sends an ICMP TTL expired in transit message back to the sender. Each hop along the way does the same thing. Also, if the packet came in on the same interface that the gateway's routing table says the packet should go out over to reach the next hop, an ICMP redirect message is sent to the sender, instructing it to bypass this gateway and directly contact the next hop on all subsequent packets. You'll know if this happened because a new route will appear in your web browser machine's routing table.
    12. Each hop passes the packet along, until at the destination the last router notices that it has a direct route to the destination, that is, a routing table entry is matched that is not another router. The packet is then delivered to the destination server.
    13. The destination server notices that at long last the IP address in the packet is its own, that is, it resolves via ARP to the Ethernet address of the server itself. Since it's not a forwarding case, and since the IP address matches, it now examines the TCP portion of the packet to determine the destination port. It also looks at the TCP header flags, and since this is the first packet, observes that only the SYN (synchronize) flag is set. Thus, this first packet is one of three in the TCP handshake process. If the port the packet is addressed to (in our case, port 80) is not bound by a process (for example, if Apache crashed) then an ICMP port unreachable message is sent to the sender and the packet is discarded. If the port is valid, and we'll assume it is, a TCP reply is sent, with both the SYN and ACK (acknowledge) flags set.
    14. The packet passes back through the various routers, and unless source routing is specified, the path back may differ from the path used to first reach the server. The client (the machine running your web browser) receives the packet, notices that it has the SYN and ACK flags set, and contains IP and port information that matches a known PCB. It replies with a TCP packet that has only the ACK flag set.
    15. This packet reaches the server, and the server moves the connection from PENDING to ESTABLISHED. Using the mechanisms of TCP, the server now guarantees data delivery between itself and the client until such time as the connection times out, or is closed by either side. This differs sharply from UDP, where there is no handshake process and packet delivery is not guaranteed, it is only best-effort and left up to the application to figure out if the packets go there or not.
    16. Now that we have a live TCP connection, the HTTP request that started all of this may be sent over the connection to the server for processing. Depending on whether or not the HTTP server (and client) supports such, the reply may consist of only a single object (usually the HTML page) and the connection closed. If persistence is enabled, then the connection is left open for subsequent HTTP requests (for example, all of the page elements, such as images, style sheets, etc.)
    Okay, as I mentioned earlier, we will now address how the client resolves the hostname into an IP address using DNS. All of the above ARP and IP information holds true for the DNS query and replies.
    1. The gethostbyname() function must first determine how it should go about turning a hostname into an IP address. To accomplish this, it consults the /etc/nsswitch.conf file, and looks for a line beginning with hosts:. It then examines the keywords listed, and tries each of them in the order given. For the purposes of this example, we'll assume the pattern to be files dns nis.
    2. The keyword files instructs the kernel to consult the /etc/hosts file. Since the web server we're trying to reach doesn't have an entry there, the match attempt fails. The kernel checks to see if another resolution method exists, and if it does, it tries it.
    3. The next method is dns, so the kernel now consults the /etc/resolv.conf file to determine what DNS server, or name resolver, it should contact.
    4. A UDP request is sent to the first-listed name server, addressed to port 53.
    5. The DNS server receives the request. It examines it to determine if it is authoritative for the requested domain; that is, does it directly serve answers for the domain? If not, then it checks to see if recursion is permitted for the client that sent the request.
    6. If recursion is permitted, then the DNS server consults its hints file (often called named.ca) for the appropriate root DNS server to talk to. It then sends a DNS request to the root server, asking it for the authoritative server for this domain. The root domain server replies with a third DNS server's name, the authoritative DNS server for the domain. This is the server that is listed when you perform a whois on the domain name.
    7. The DNS server now contacts the authoritative DNS server, and asks for the IP address of the given hostname. The answer is cached, and the answer is returned to the client.
    8. If recursion is not supported, then the DNS server simply replies with go away or go talk to a root server. The client is then responsible for carrying on, as follows.
    9. The client receives the negative response, and sends the same DNS request to a root DNS server.
    10. The root DNS server receives the query, and since it is not configured to support recursion, but is a root server, it responds with "go ask so-and-so, that's the authoritative server for that domain." Note that this is not the final answer, but is a definite improvement over a simple go away.
    11. The client now knows who to ask, and sends the original DNS query for a third time to the authoritative server. It replies with the IP address, and the lookup process is complete.
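    You can watch most of this resolution process yourself; a sketch using common tools (dig ships in the bind-utils package on many distributions):

    getent hosts www.example.com
    dig +trace www.example.com

    The first command resolves the name much as gethostbyname() does, honoring nsswitch.conf; the second performs the recursion itself, walking from the root servers down to the authoritative answer.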
    A few notes on things that I didn't want to clutter up the above narrative with:
    • When a network interface is first brought online, be it during boot or manually by the administrator, something called a gratuitous ARP request is broadcast. It literally asks, "who has 1.2.3.4? Tell 1.2.3.4." This looks redundant at first glance, but it actually serves a dual purpose: it allows neighboring machines to cache the new IP to Ethernet address mapping, and if another machine already has that IP address, it will reply with a typical ARP response of "Hey 1.2.3.4, 1.2.3.4 is 8:0:20:4:3f:2a." The first machine will then log an error message to the console saying "IP address 1.2.3.4 is already in use by 8:0:20:4:3f:2a." This is done to communicate to you that your Excel spreadsheet of IP addresses is wrong and should be replaced with something a bit more accurate and reliable.
    • The Ethernet layer contains a lot more complexities than I detailed above. In particular, because only one machine can be talking over the wire at a time (literally due to electrical limitations) there are various mechanisms in place to prevent collisions. The most widely used is called CSMA/CD, or Carrier Sense Multiple Access with Collision Detection, where each network card is responsible for only transmitting when a wire clear carrier signal is present. Also, should two cards start transmitting at the exact same instant, all cards are responsible for detecting the collision and reporting it to the responsible cards. They then must stop transmitting and wait a random time interval before trying again. This is the main reason for network segmentation; the more hosts you have on a single wire, the more collisions you'll get; and the more collisions you get, the slower the overall network becomes.

    Tuesday, December 15, 2009

    HOWTO : Make sure no rootkit on your Ubuntu server


    To ensure that rootkits, trojans, or worms are not installed on your server without your approval, you should check it frequently.

    ChkRootKit

    Get the chkrootkit package :

    sudo apt-get install chkrootkit

    Make a Cron Job to do the scan daily at 0700 hours :

    sudo crontab -e



    0 7 * * * /usr/sbin/chkrootkit; /usr/sbin/chkrootkit -q 2>&1 | mail -s "Daily ChkRootKit Scan" me@mail.com

    Do a manual scan :

    sudo /usr/sbin/chkrootkit


    Rootkit Hunter (Optional)

    sudo apt-get install rkhunter

    Make a Cron Job to do the scan daily at 0500 hours :

    sudo crontab -e



    0 5 * * * rkhunter --cronjob --rwo | mail -s "Daily Rootkit Hunter Scan" me@mail.com

    Do a manual scan :

    sudo rkhunter --check


    Forensic tool to find hidden processes and ports – unhide

    Get the unhide package :

    sudo apt-get install unhide

    Make a Cron Job to do the scan daily between 0800 and 0930 hours :

    sudo crontab -e

    0 8 * * * unhide proc; unhide proc -q 2>&1 | mail -s "Daily unhide proc Scan" me@mail.com

    30 8 * * * unhide sys; unhide sys -q 2>&1 | mail -s "Daily unhide sys Scan" me@mail.com

    0 9 * * * unhide brute; unhide brute -q 2>&1 | mail -s "Daily unhide brute Scan" me@mail.com

    30 9 * * * unhide-tcp; unhide-tcp -q 2>&1 | mail -s "Daily unhide-tcp Scan" me@mail.com

    Do a manual scan :

    sudo unhide proc
    sudo unhide sys
    sudo unhide brute
    sudo unhide-tcp

    Beware :
    Rootkit Hunter or ChkRootKit may produce some false positives when your packages or files have been updated, or when they behave in a way similar to a rootkit.

    Remarks :
    These checks are not 100% proof that your system is safe from rootkit attacks.

    Tuesday, December 1, 2009

    Troubleshooting Linux networking: module and driver problems


    Network-related problems on your Linux machine can be hard to resolve because they go beyond the trusted environment of your Linux box. But, as a Linux administrator, you can help your network administrator by applying the right technologies. In this article you'll learn how to troubleshoot network related driver problems.
    It's easy to determine that a problem you're encountering is a network problem -- if your computer can't communicate with other computers, something is wrong on the network. But, it may be harder to find the source of the problem. You need to begin by analyzing the chain of elements involved in network communication.
    If your host needs to communicate with another host in the network, the following conditions need to be met:
    1. The network card is installed and available in the operating system, i.e., the correct driver is loaded.
    2. The network card has an IP address assigned to it.
    3. The computer can communicate with other hosts in the same network.
    4. The computer can communicate with other hosts in other networks.
    5. The computer can communicate with other hosts using their host names.

    Troubleshooting network driver issues
    To communicate with other computers on the network, your computer needs a network interface. The method your computer uses to obtain such a network interface is well designed. During the system boot, the kernel probes the different interfaces that are available and typically on the PCI bus, finds a network card. Next, it determines which driver is needed to address the network card and if the driver is available, it will address the network card. Following that, the udev daemon (udevd) is started in the initial boot phase of your computer and it creates the network device for you. In a simple computer with one network interface only, this will typically be the eth0 device but as you will read later, other interfaces can also be used. Once the interface has been loaded, the next stage can be passed in which the network card gets an IP address.
    As was just discussed, several items are involved in loading the correct driver for the network card.
    1. The kernel probes the PCI bus.
    2. Based on the information it finds on the PCI bus, a driver is loaded.
    3. Udev creates the network interface, which is what you actually use to access the network.
    To fix network card problems, begin by determining whether the network card was really found on the PCI bus. To do that, use the lspci command. Here is an example output of lspci:
    JBO:~ # lspci
    00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (rev 01)
    00:01.0 PCI bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge (rev 01)
    00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 08)
    00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
    00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)
    00:07.7 System peripheral: VMware Inc Virtual Machine Communication Interface (rev 10)
    00:0f.0 VGA compatible controller: VMware Inc Abstract SVGA II Adapter
    00:10.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)
    02:00.0 USB Controller: Intel Corporation 82371AB/EB/MB PIIX4 USB
    02:01.0 Ethernet controller: Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE] (rev 10)
    02:02.0 Multimedia audio controller: Ensoniq ES1371 [AudioPCI-97] (rev 02)
    02:03.0 USB Controller: VMware Inc Abstract USB2 EHCI Controller
    JBO:~ #
    Here, at PCI address 02:01.0 an Ethernet network card is found. The network card is an AMD 79c970 and (between square brackets) the PCnet32 kernel module is needed to address this network card.
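
    On most current distributions, lspci can also report the bound driver directly, which saves a trip into /sys; a small sketch, using the PCI address found above:

    # -k prints the kernel driver in use (and candidate modules) for each device
    lspci -k -s 02:01.0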
    The next step is to check the hardware configuration as reflected in the /sys tree. Every PCI device has its configuration stored there; for the network card in this example, it is stored in the directory /sys/bus/pci/devices/0000:02:01.0, which reflects the address of the device on the PCI bus. Here is an example of the contents of this directory:
    JBO:/sys/bus/pci/devices/0000:02:01.0 # ls -l
    total 0
    -rw-r--r-- 1 root root  4096 Oct 18 07:08 broken_parity_status
    -r--r--r-- 1 root root  4096 Oct 17 07:50 class
    -rw-r--r-- 1 root root   256 Oct 17 07:50 config
    -r--r--r-- 1 root root  4096 Oct 17 07:50 device
    lrwxrwxrwx 1 root root     0 Oct 17 07:51 driver -> ../../../../bus/pci/drivers/pcnet32
    -rw------- 1 root root  4096 Oct 18 07:08 enable
    lrwxrwxrwx 1 root root     0 Oct 18 07:08 firmware_node -> ../../../LNXSYSTM:00/device:00/PNP0A03:00/device:06/device:08
    -r--r--r-- 1 root root  4096 Oct 17 07:50 irq
    -r--r--r-- 1 root root  4096 Oct 18 07:08 local_cpulist
    -r--r--r-- 1 root root  4096 Oct 18 07:08 local_cpus
    -r--r--r-- 1 root root  4096 Oct 17 07:53 modalias
    -rw-r--r-- 1 root root  4096 Oct 18 07:08 msi_bus
    drwxr-xr-x 3 root root     0 Oct 17 07:50 net
    -r--r--r-- 1 root root  4096 Oct 18 07:08 numa_node
    drwxr-xr-x 2 root root     0 Oct 18 07:08 power
    -r--r--r-- 1 root root  4096 Oct 17 07:50 resource
    -rw------- 1 root root   128 Oct 18 07:08 resource0
    -r-------- 1 root root 65536 Oct 18 07:08 rom
    lrwxrwxrwx 1 root root     0 Oct 17 07:50 subsystem -> ../../../../bus/pci
    -r--r--r-- 1 root root  4096 Oct 17 07:51 subsystem_device
    -r--r--r-- 1 root root  4096 Oct 17 07:51 subsystem_vendor
    -rw-r--r-- 1 root root  4096 Oct 17 07:51 uevent
    -r--r--r-- 1 root root  4096 Oct 17 07:50 vendor
    JBO:/sys/bus/pci/devices/0000:02:01.0 #
    The most interesting item for troubleshooting is the symbolic link to the driver directory. In this example it points to the pcnet32 driver and using the information that lspci provided, we know this is the correct driver.
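
    You can also resolve that symbolic link directly as a quick sanity check, assuming the same PCI address as above:

    # Show which driver directory the device's "driver" link points to
    readlink /sys/bus/pci/devices/0000:02:01.0/driver
    # ../../../../bus/pci/drivers/pcnet32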
    In most cases, the driver that Linux installs will work fine; in some cases it doesn't. When configuring a Dell server with a Broadcom network card, I have seen severe problems where a ping command using jumbo frames could cause a kernel panic. One of the first things to suspect in such a case is the kernel driver for the network card. A good troubleshooting approach is to start by finding out which version of the driver you are using. You can accomplish this by running the modinfo command on the driver itself. Here is an example of modinfo on the pcnet32 driver:
    JBO:/ # modinfo pcnet32
    filename:       /lib/modules/2.6.27.19-5-pae/kernel/drivers/net/pcnet32.ko
    license:        GPL
    description:    Driver for PCnet32 and PCnetPCI based ethercards
    author:         Thomas Bogendoerfer
    srcversion:     261B01C36AC94382ED8D984
    alias:          pci:v00001023d00002000sv*sd*bc02sc00i*
    alias:          pci:v00001022d00002000sv*sd*bc*sc*i*
    alias:          pci:v00001022d00002001sv*sd*bc*sc*i*
    depends:        mii
    supported:      yes
    vermagic:       2.6.27.19-5-pae SMP mod_unload modversions 586
    parm:           debug:pcnet32 debug level (int)
    parm:           max_interrupt_work:pcnet32 maximum events handled per interrupt (int)
    parm:           rx_copybreak:pcnet32 copy breakpoint for copy-only-tiny-frames (int)
    parm:           tx_start_pt:pcnet32 transmit start point (0-3) (int)
    parm:           pcnet32vlb:pcnet32 Vesa local bus (VLB) support (0/1) (int)
    parm:           options:pcnet32 initial option setting(s) (0-15) (array of int)
    parm:           full_duplex:pcnet32 full duplex setting(s) (1) (array of int)
    parm:           homepna:pcnet32 mode for 79C978 cards (1 for HomePNA, 0 for Ethernet, default Ethernet) (array of int)
    The modinfo command gives useful information about each module. If a version number is included, check whether an updated version is available; if so, download and install it.
    When working with some hardware, you should also check what kind of module is used. If the module is open source, it's generally fine, as open source modules are thoroughly reviewed by the Linux community. If the module is proprietary, there may be incompatibilities between the kernel and that particular module. If so, your kernel is flagged as "tainted": a tainted kernel has modules loaded that are not controlled by the Linux kernel community. To find out if this is the case on your system, check the contents of the /proc/sys/kernel/tainted file. The value is a bit mask: 0 means no proprietary modules are loaded, while a value with bit 0 set (e.g. 1) means a proprietary module has been loaded, and you may be able to fix the situation by replacing it with an open source module.
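
    A small shell sketch to check the taint status and flag loaded modules whose declared license does not look open source (the license patterns below are illustrative, not exhaustive):

    # 0 means the kernel is not tainted
    cat /proc/sys/kernel/tainted

    # Flag loaded modules with a non-free or unknown license
    for m in $(lsmod | awk 'NR>1 {print $1}'); do
        lic=$(modinfo -F license "$m" 2>/dev/null)
        case "$lic" in
            GPL*|*BSD*|Dual*|MIT*) ;;           # common free licenses
            *) echo "$m: ${lic:-unknown}" ;;    # flag everything else
        esac
    done
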
    The information in this article should help you fix driver-related issues.

    Monday, October 26, 2009

    Linux Security Notes 14: Squid notes 8: Cache Hierarchies & Transparent Proxy

    Squid cache Hierarchies:

    Parent-Child Hierarchies:


        Here we will define the parent-child cache peering relationship. The cache lives on two servers: a main cache server, called the parent, and a local cache server, called the client. All local users query the client server for cached objects, and the client pulls from the parent anything it does not hold locally. Only one Squid server is connected to the external network, so for auditing purposes all traffic from the other Squid servers is routed through that single proxy server.

    Configuring the cache peering
    Scenario:
    The 192.168.1.0/24 local network queries the cache client client.cache.domain.com:3128, which in turn queries parent.cache.domain.com for any object not found locally. The parent cache server is the only one connected to the Internet. The client uses port 3130 (UDP) to find out whether the requested object is present in the parent cache.
    Note:-
        Squid supports multiple inter-cache protocols: CARP (Cache Array Routing Protocol), ICP, HTCP (Hypertext Caching Protocol), Cache Digests, etc. Make sure that ports 4827, 3130 and 3128 are open in the firewall if the client cache sits behind one.
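
    For reference, a minimal iptables sketch to open those ports on the client cache's firewall (chain and policy details are assumptions; adapt them to your own rule set):

    iptables -A INPUT -p tcp --dport 3128 -j ACCEPT   # HTTP proxy traffic
    iptables -A INPUT -p udp --dport 3130 -j ACCEPT   # ICP queries
    iptables -A INPUT -p udp --dport 4827 -j ACCEPT   # HTCP queries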

    Configuring the cache-peer

    In client.cache.domain.com
    # vim squid.conf
    --------------
    cache_peer    parent.cache.domain.com      parent    8080    3130    default
    --------------
    # reload squid

        This makes client.cache.domain.com query parent.cache.domain.com using the cache peer (ICP) port 3130 and proxy port 8080 with the default settings.
    Test by pointing the proxy variable of hosts on the subnet at client.cache.domain.com. The client will try to pull the page from client.cache.domain.com; if the page is not found there, the Squid running on client.cache.domain.com will contact parent.cache.domain.com for it. Check the access.log file for the request path.
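
    A minimal test sketch from a machine on the 192.168.1.0/24 network (the URL is only an example, and the log location is assumed to be the distribution default):

    # Point HTTP traffic at the child cache, then fetch a page
    export http_proxy=http://client.cache.domain.com:3128
    wget -O /dev/null http://www.example.com/

    # On each squid server, follow the request path in the logs
    tail -f /var/log/squid/access.log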

    Sibling Hierarchies:

    Sibling-cache Relationship
        This shares the cache among multiple Squid servers: if a server receives a query for an object that is not in its cache, it asks its sibling proxy servers for the same object before fetching it from the Internet. Implementing this feature saves bandwidth and download time, since the cache is shared among the siblings.

    Configuration:
    In client.cache.domain.com

    # vim squid.conf
    ---------
    cache_peer    parent.cache.domain.com      sibling    8080    3130    default
    ---------
    # reload squid


    In parent.cache.domain.com
    # vim squid.conf
    -----------
    cache_peer    client.cache.domain.com      sibling    8080    3130    default
    -----------
    # reload squid

        This makes both servers act as siblings: an object cached on either server is shared before queries go out to the Internet.
        Test by setting the proxy variables on a client and checking the access.log file on both servers; you should be able to trace queries between the siblings.
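
    A sketch for verifying sibling hits (example URL; ports as configured above, log location assumed to be the distribution default):

    # Fetch a page through one sibling, then request it through the other;
    # the second fetch should be answered from the first sibling's cache
    http_proxy=http://client.cache.domain.com:8080 wget -q -O /dev/null http://www.example.com/
    http_proxy=http://parent.cache.domain.com:8080 wget -q -O /dev/null http://www.example.com/
    grep -E 'SIBLING_HIT|UDP_HIT' /var/log/squid/access.log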

    Limiting the squid service access:
        To limit how many connections each user may open.
    # vim squid.conf
    ---------
    acl all src 0.0.0.0/0.0.0.0
    acl connection_limit    maxconn    10
    http_access deny connection_limit    all
    ---------
    # reload squid

        If a user attempts to open more than 10 connections to the server, Squid will deny the additional connections. Test using wget.
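
    A quick way to exercise the limit is to start more than 10 simultaneous downloads from one source; a sketch (proxy host and URL are examples):

    # Connections beyond the 10th should be denied by squid
    for i in $(seq 1 12); do
        http_proxy=http://client.cache.domain.com:3128 \
            wget -q -O /dev/null http://www.example.com/ &
    done
    wait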


    Transparent Proxy

        Local network (No proxy settings in browser)-> proxy/firewall (http accelerator and Iptables) -> Internet
    Configuring the Transparent Proxy in proxy/Firewall box
    Step 1:
    # echo 1 > /proc/sys/net/ipv4/ip_forward
    # iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-port 3128

        (any packet arriving on the internal interface with destination port 80 is redirected to port 3128)
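
    Note that the echo into /proc enables forwarding only until the next reboot. To make it persistent, a common sketch (assuming the system reads /etc/sysctl.conf at boot):

    echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
    sysctl -p    # apply the setting immediately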
    Step2:
        Now we have to configure Squid as a transparent proxy by adding the HTTP-acceleration directives (needed only in the 2.x series; newer versions do not require this).

    # vim squid.conf
    -----------
    httpd_accel_host     virtual
    httpd_accel_port     80
    httpd_accel_with_proxy    on
    httpd_accel_uses_host_header    on
    -----------
    # reload squid

        These directives make the proxy run in transparent mode.
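
    For comparison, on Squid 2.6 and later the four httpd_accel_* directives are replaced by a single option on the listening port (Squid 3.1 and later call it "intercept"); a minimal sketch:

    # squid.conf on Squid 2.6+
    http_port 3128 transparent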
    Note:-
    # squid -v

        The command shows the options Squid was compiled with. Look for a key option used to integrate with iptables when enabling transparent proxying, named "--enable-linux-netfilter". This option makes Squid integrate with iptables while running in transparent mode.

    Linux Security Notes 14: Squid notes 7: Bandwidth Management using Delay Pools

    Squid - Delay Pools Bandwidth Management
        This feature is used to restrict bandwidth usage for the user community. It was introduced in version 2.x.

    Implementing bandwidth management using delay pool

    Delay pools provide 3 different classes of restriction

    1. A class 1 pool restricts the rate of bandwidth for large downloads.
        This restricts the download rate once a large file exceeds a size threshold.
    Implementing a class 1 delay pool
    Steps:
    1.  Define the ACL for the delay pool
    2.  Define the number of delay pools (delay_pools 1)
    3.  Define the class of the delay pool (delay_class 1 1)
    4.  Set the parameters for the pool number (delay_parameters 1 restore_rate/max_size). Once a request exceeds max_size, Squid throttles that user/source to the given restore_rate (both values are measured in bytes). e.g.: delay_parameters 1 20000/15000
    5.  Enable delay_access to activate the feature (delay_access)
    Configure the class 1 delay pool
    # vim squid.conf
    --------
    acl    bw_users    src    192.168.1.0/24      # The ACL defined for the network
    delay_pools    1                              # The number of delay pools
    delay_class    1 1                            # Delay pool number 1 is a class 1 pool
    delay_parameters    1    20000/15000          # Pool number 1 throttles to a restore rate of 20000 bytes/s once usage exceeds 15000 bytes
    delay_access    1    allow    bw_users        # The access tag that ties the pool to the acl bw_users
    --------
    # reload squid

        This throttles any single source that exceeds the 15 KB download limit to a restore rate of 20 KB/s.
    Test the configuration by downloading files using wget.
    Limitations of class 1 pools:
        If we have 1,500,000 bytes/s of bandwidth and each source restores at 20,000 bytes/s, then at most 1,500,000/20,000 = 75 sources can download at the full restore rate simultaneously. A large number of connections from sources will max out the link.
     
    2. A class 2 pool sets bandwidth usage to a sustained rate

        Using a class 2 pool we can overcome the max-out limitation of class 1 by imposing an aggregate rate.


    Configure the class 2 pool

    Suppose we have a link with 1.5 Mbit/s (about 193,000 bytes/s) of bandwidth,
    we want to set a ceiling of 62,500 bytes/s (500 kbit/s) for net usage,
    and each user gets 10% of that ceiling.

    # vim squid.conf
    ----------
    acl    bw_users    src    192.168.1.0/24    # The ACL defined for the network
    delay_pools    1                            # Number of pools
    delay_class    1 2                          # Pool number 1 is a class 2 pool
    delay_parameters    1 62500/62500 6250/6250 # Aggregate ceiling of 62500 bytes/s (500 kbit/s) with an individual ceiling of 10% of that; at any given time each user is restricted to 10% of the 500 kbit/s ceiling
    delay_access  1  allow  bw_users            # The access tag that ties the pool to the acl bw_users
    ----------
    # reload squid

        Test the rate of bandwidth using wget. Here all sources are restricted to 10% of the ceiling from the beginning, which leaves the rest of the bandwidth free for other uses: out of the 1.5 Mbit/s link we reserved a 500 kbit/s ceiling for the internal network, and we told Squid that each request from a source gets at most 10% of that ceiling.
    Note:-
     In a class 1 pool, throttling starts only after the maximum download size is reached. In a class 2 pool there is no trigger size: a ceiling is defined, and the user is restricted to it from the beginning.
       
    3. A class 3 pool restricts bandwidth usage per subnet
        This implements bandwidth management with an aggregate rate per subnet, i.e. a class 2 pool with a subnet-based ceiling.

    Configuring the class 3 pool
    # vim squid.conf
    ----------
    acl    bw_users    src    192.168.1.0/24    # The ACL defined for the network
    delay_pools    1                            # Number of pools
    delay_class    1 3                          # Pool number 1 is a class 3 pool
    delay_parameters    1 62500/62500 31250/31250 6250/6250 # Aggregate ceiling of 62500 bytes/s (500 kbit/s); each subnet is restricted to 50% of the ceiling, and each user within a subnet to 20% of the subnet ceiling
    delay_access  1  allow  bw_users            # The access tag that ties the pool to the acl bw_users
    ----------
    # reload squid

        This makes Squid restrict each subnet to 50% of the ceiling (in case we have 2 subnets in our network), with each user getting 20% of the subnet ceiling: out of 1.5 Mbit/s we reserved a 500 kbit/s ceiling; each subnet shares 50% of that ceiling (31,250 bytes/s), and each user within a subnet gets 20% of the subnet ceiling (6,250 bytes/s).
     
    Delay pool class 2 with a time-based ACL:
        This implements bandwidth management only during business hours.

    Configure the Class2 pool with time restriction
    # vim squid.conf
    ----------
    acl    bw_users src 192.168.1.0/24          # The ACL defined for the network
    acl work_time time MTWHF 09:00-18:00        # Business hours, Monday through Friday
    delay_pools    1                            # Number of pools
    delay_class    1 2                          # Pool number 1 is a class 2 pool
    delay_parameters    1 62500/62500 25000/25000 # Each user is limited to 25000 bytes/s of bandwidth
    delay_access  1  allow work_time            # Apply the pool only during work_time
    ----------
    # reload squid

        This activates the class 2 pool only during office hours. Test by changing the time on the Squid server after configuring the class 2 pool with the time period.