
Tuesday, March 20, 2012

Wiping a hard drive

Ever needed to completely wipe critical data off a hard drive? As we all know, mkfs doesn't erase much (you already knew this, right?). mkfs and its variants (such as mkfs.ext3 and mke2fs) only get rid of a few important data structures on the filesystem, but the data is still there! For a SCSI disk connected as /dev/sdb, a quick:
dd if=/dev/sdb | strings
will let anyone recover text data from a supposedly erased hard drive. Binary data is more complicated to retrieve, but the same basic principle applies: the data was not completely erased.
To make things harder for the bad guys, an old trick was to use the 'dd' command as a way to erase a drive (note that this command WILL erase your disk!):
dd if=/dev/zero of=/dev/sdb
There's one problem with this: newer, more advanced techniques make it possible to retrieve data that was replaced with a bunch of 0's. To make it more difficult, if not impossible, for the bad guys to read data that was previously stored on a disk, Red Hat ships the 'shred' utility as part of the coreutils RPM package. Running 'shred' on a disk or a partition will write repeatedly (25 times by default) to all locations on the disk (be careful with this one too!):
shred /dev/sdb
This is currently known to be a very safe way to delete data from a hard drive before, let's say, you ship it back to the manufacturer for repair or before you sell it on eBay!
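shred also accepts options to control the overwriting; for instance (a sketch - and, as above, this WILL destroy the data on the target device):
# 3 overwrite passes, verbose progress, then a final pass of zeros
shred -v -n 3 -z /dev/sdb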

Reference:
http://www.redhat.com/magazine/026dec06/features/tips_tricks/

Wednesday, December 14, 2011

New features of yum in RHEL-6.1 now that it's released


A few things you might not know about RHEL-6.1+ yum

  • Search is more user friendly

    As we maintain yum we are always looking for the "minor" changes that can make a big difference to the user, and this is probably one of the biggest minor changes. As of late RHEL-5 and RHEL-6.0, "yum search" was great for finding obscure things that you knew something about, but with 6.1 we've hopefully made it useful for finding the "everyday" packages you can't remember the exact name of. We did this by excluding a lot of the "extra" hits when you get a large search result. For instance, "yum search kvm manager" is pretty useless in RHEL-6.0, but in RHEL-6.1 you should find what you want very quickly.
    Example commands:

    yum search kvm manager
    yum search python url
    
  • The updateinfo command

    The "yum-security" or "yum-plugin-security" package has been around since early RHEL-5, but the RHEL-6.1 update introduces the "updateinfo" command to make things a little easier to use, and you can now easily view installed security errata (to more easily make sure you are secure). We've also added a few new pieces of data to the RHEL updateinfo data. Probably the most significant is that, as well as errata being marked "security" or not, they are now tagged with their "severity", so you can automatically apply only "critical" security updates, for example.
Example commands:

yum updateinfo list security all
yum update-minimal --sec-severity=critical


  • The versionlock command

    As with the previous point, we've had "yum-plugin-versionlock" for a long time, but now we've made it easier to use and put all its functions under a single "versionlock" sub-command. You can now also "exclude" specific versions you don't want, instead of locking to known-good specific ones you had tested.
Example commands:

# Lock to the version of yum currently installed.
yum versionlock add yum
# Opposite, disallow versions of yum currently available:
yum versionlock exclude yum
yum versionlock list
yum versionlock delete yum\*
yum versionlock clear
# This will show how many "excluded" packages are in each repo.
yum repolist -x .


  • Manage your own .repo variables

    This is actually available in RHEL-6.0, but given that almost nobody knows about it I thought I'd share it here. You can put files in "/etc/yum/vars" and then use the names of those files as variables in any yum configuration, just like $basearch or $releasever. There is also a special $uuid variable, so you can track individual machines if you want to.
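    For example, a minimal sketch (the variable name "osname" and the URL are made up for illustration):

    # create a variable called $osname, usable in any yum configuration
    echo rhel6 > /etc/yum/vars/osname
    # ...then, in a .repo file:
    #   baseurl=http://example.com/$osname/$basearch/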

  • yum has its own DB

    Again, this is something that was there in RHEL-6.0 but has improved (and is likely to improve more over time). The most noticeable addition is that we now store the "installed_by" and "changed_by" attributes; this could be worked out from "yum history" before, but now it's easily available directly from the installed package.
  • Example commands:
    yumdb 
    yumdb info yum 
    yumdb set installonly keep kernel-2.6.32-71.7.1.el6 
    yumdb sync
  • Additional data in "yum history"

    Again, this is something that was there in RHEL-6.0 but has improved (and is likely to improve more over time). The most noticeable additions are that we now store the command line, and we store a "transaction file" that you can use on other machines.
    Example commands:

    yum history
    yum history pkgs yum
    yum history summary
    
    yum history undo last
    
    yum history addon-info 1    config-main
    yum history addon-info last saved_tx
    
    "yum install" is now fully kickstart compatible As of RHEL-6.0 there was one thing you could do in a kickstart package list that you couldn't do in "yum install" and that was to "remove" packages with "-package". As of the RHEL-6.1 yum you can do that, and we also added that functionality to upgrade/downgrade/remove. Apart from anything else, this should make it very easy to turn the kickstart package list into "yum shell" files (which can even be run in kickstart's %post).
    Example commands:

     yum install 'config(postfix) >= 2.7.0'
     yum install MTA
     yum install '/usr/kerberos/sbin/*'
     yum -- install @books -javanotes
    
  • Easier to change yum configuration

    We tended to get a lot of feature requests for plugins to add a command-line option so the user could change a single yum.conf variable, and we had to evaluate those requests for general distribution based on how much we thought all users would want/need them. With the RHEL-6.1 yum we created the --setopt option so that any option can be changed easily, without having to create a specific bit of code. There were also some updates to the yum-config-manager command.
    Example commands:
    yum --setopt=alwaysprompt=false upgrade yum
    yum-config-manager
    yum-config-manager --enable myrepo
    yum-config-manager --add-repo https://example.com/myrepo.repo
  • Working towards managing 10 machines easily

    yum is the best way to manage a single machine, but it isn't quite as good at managing 10 identical machines. While the RHEL-6.1 yum still isn't great at this, we've made a few improvements that should help significantly. The biggest is probably the "load-ts" command, and the infrastructure around it, which allows you to easily create a transaction on one machine, test it, and then "deploy" it to a number of other machines. This is done with checking on the yum side that the machines started from the same place (via rpmdb versions), so that you know you are doing the same operation.
    Also worth noting is that we have added a plugin hook to the "package verify" operation, allowing things like "puppet" to hook into the verification process. A prototype of what that should allow those kinds of tools to do was written by Seth Vidal.
    Example commands:

    # Find the current rpmdb version for this machine (available in RHEL-6.0)
    yum version nogroups
    # Completely re-image a machine, or dump its "package image"
    yum-debug-dump
    yum-debug-restore \
        --install-latest \
        --ignore-arch \
        --filter-types=install,remove,update,downgrade
    
    # This is the easiest way to get a transaction file without modifying the rpmdb
    echo | yum update blah
    ls ${TMPDIR:-/tmp}/yum_save_tx-* | sort | tail -1
    
    # You can now load a transaction and/or see the previous transaction from the history
    yum load-ts /tmp/yum_save_tx-2011-01-17-01-00ToIFXK.yumtx
    yum -q history addon-info last saved_tx > my-yum-saved-tx.yumtx

    
    
    

    Tuesday, November 22, 2011

    Linux filtering and transforming text - Command Line Reference


    View defined directives in a config file:


    grep -v '^#' /etc/vsftpd/vsftpd.conf | grep .


    View the line matching "Initializing CPU#1" and the 5 lines immediately after it using 'grep', or display a specific line range using 'sed':


    grep -A 5 "Initializing CPU#1" dmesg
    sed -n '101,110p' /var/log/cron    # displays lines 101 to 110 of the log file


    Exclude the comment and empty lines:

    grep -v '^#' /etc/vsftpd/vsftpd.conf | grep .
    grep -v '^#' /etc/ssh/sshd_config | sed -e /^$/d
    grep -v '^#' /etc/ssh/sshd_config | awk '/./{print}'

    More examples of grep:

    grep smug *.txt
    {search *.txt files for 'smug'}
    grep BOB tmpfile
    {search 'tmpfile' for 'BOB' anywhere in a line}
    grep -i -w blkptr *
    {search files in CWD for word blkptr, any case}
    grep 'run[- ]time' *.txt
    {find 'run time' or 'run-time' in all txt files}
    who | grep root
    {pipe who to grep, look for root}
    grep smug files
    {search files for lines with 'smug'}
    grep '^smug' files
    {'smug' at the start of a line}
    grep 'smug$' files
    {'smug' at the end of a line}
    grep '^smug$' files
    {lines containing only 'smug'}
    grep '\^s' files
    {lines starting with '^s', "\" escapes the ^}
    grep '[Ss]mug' files
    {search for 'Smug' or 'smug'}
    grep 'B[oO][bB]' files
    {search for BOB, Bob, BOb or BoB}
    grep '^$' files
    {search for blank lines}
    grep '[0-9][0-9]' file
    {search for pairs of numeric digits}
    grep '^From: ' /usr/mail/$USER
    {list your mail}
    grep '[a-zA-Z]' file
    {any line with at least one letter}
    grep '[^a-zA-Z0-9]' file
    {anything not a letter or number}
    grep '[0-9]\{3\}-[0-9]\{4\}' file
    {999-9999, like phone numbers}
    grep '^.$' file
    {lines with exactly one character}
    grep '"smug"' file
    {'smug' within double quotes}
    grep '"*smug"*' file
    {'smug', with or without quotes}
    grep '^\.' file
    {any line that starts with a Period "."}
    grep '^\.[a-z][a-z]' file
    {line starts with "." and 2 lowercase letters}


    Grep command symbols used to search files:

    ^ (Caret) = match expression at the start of a line, as in ^A.
    $ (Dollar Sign) = match expression at the end of a line, as in A$.
    \ (Back Slash) = turn off the special meaning of the next character, as in \^.
    [ ] (Brackets) = match any one of the enclosed characters, as in [aeiou].
    Use Hyphen "-" for a range, as in [0-9].
    [^ ] = match any one character except those enclosed in [ ], as in [^0-9].
    . (Period) = match a single character of any value, except end of line.
    * (Asterisk) = match zero or more of the preceding character or expression.
    \{x,y\} = match x to y occurrences of the preceding.
    \{x\} = match exactly x occurrences of the preceding.
    \{x,\} = match x or more occurrences of the preceding.

    Monday, October 24, 2011

    How To Disable SSH Host Key Checking


    Remote login using the SSH protocol is a frequent activity in today's internet world. With the SSH protocol, the onus is on the SSH client to verify the identity of the host to which it is connecting. The host identity is established by its SSH host key. Typically, the host key is auto-created during initial SSH installation setup.

    By default, the SSH client verifies the host key against a local file containing known, trustworthy machines. This provides protection against possible Man-In-The-Middle attacks. However, there are situations in which you want to bypass this verification step. This article explains how to disable host key checking using OpenSSH, a popular Free and Open-Source implementation of SSH.

    When you login to a remote host for the first time, the remote host's host key is most likely unknown to the SSH client. The default behavior is to ask the user to confirm the fingerprint of the host key.
    $ ssh peter@192.168.0.100
    The authenticity of host '192.168.0.100 (192.168.0.100)' can't be established.
    RSA key fingerprint is 3f:1b:f4:bd:c5:aa:c1:1f:bf:4e:2e:cf:53:fa:d8:59.
    Are you sure you want to continue connecting (yes/no)? 

    If your answer is yes, the SSH client continues login, and stores the host key locally in the file ~/.ssh/known_hosts. You only need to validate the host key the first time around: in subsequent logins, you will not be prompted to confirm it again.

    Yet, from time to time, when you try to remote login to the same host from the same origin, you may be refused with the following warning message:
    $ ssh peter@192.168.0.100
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    @    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
    Someone could be eavesdropping on you right now (man-in-the-middle attack)!
    It is also possible that the RSA host key has just been changed.
    The fingerprint for the RSA key sent by the remote host is
    3f:1b:f4:bd:c5:aa:c1:1f:bf:4e:2e:cf:53:fa:d8:59.
    Please contact your system administrator.
    Add correct host key in /home/peter/.ssh/known_hosts to get rid of this message.
    Offending key in /home/peter/.ssh/known_hosts:3
    RSA host key for 192.168.0.100 has changed and you have requested strict checking.
    Host key verification failed.$

    There are multiple possible reasons why the remote host key changed. A Man-in-the-Middle attack is only one possible reason. Other possible reasons include:
    • OpenSSH was re-installed on the remote host but, for whatever reason, the original host key was not restored.
    • The remote host was replaced legitimately by another machine.

    If you are sure that this is harmless, you can use either 1 of 2 methods below to trick openSSH to let you login. But be warned that you have become vulnerable to man-in-the-middle attacks.

    The first method is to remove the remote host from the ~/.ssh/known_hosts file. Note that the warning message already tells you the line number in the known_hosts file that corresponds to the target remote host. The offending line in the above example is line 3 ("Offending key in /home/peter/.ssh/known_hosts:3").

    You can use the following one liner to remove that one line (line 3) from the file.
    $ sed -i 3d ~/.ssh/known_hosts

    Note that with the above method, you will be prompted to confirm the host key fingerprint when you run ssh to login.
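    Alternatively, recent versions of OpenSSH can remove the offending entry for you, by hostname or IP address:
    $ ssh-keygen -R 192.168.0.100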

    The second method uses two openSSH parameters:
    • StrictHostKeyChecking, and
    • UserKnownHostsFile.

    This method tricks SSH by configuring it to use an empty known_hosts file, and NOT to ask you to confirm the remote host identity key.
    $ ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no peter@192.168.0.100
    Warning: Permanently added '192.168.0.100' (RSA) to the list of known hosts.
    peter@192.168.0.100's password:

    The UserKnownHostsFile parameter specifies the database file to use for storing the user host keys (default is ~/.ssh/known_hosts).

    The /dev/null file is a special system device file that discards anything and everything written to it, and when used as the input file, returns End Of File immediately.

    By configuring the null device file as the host key database, SSH is fooled into thinking that the SSH client has never connected to any SSH server before, and so will never run into a mismatched host key.

    The parameter StrictHostKeyChecking specifies whether SSH will automatically add new host keys to the host key database file. By setting it to no, the host key is added automatically, without user confirmation, on every first-time connection. Because of the null key database file, every connection is treated as a first-time connection to any SSH server host, so the host key is always added without user confirmation. Writing the key to the /dev/null file discards it and reports success.


    By specifying the above 2 SSH options on the command line, you can bypass host key checking for that particular SSH login. If you want to bypass host key checking on a permanent basis, you need to specify those same options in the SSH configuration file.

    You can edit the global SSH configuration file (/etc/ssh/ssh_config) if you want to make the changes permanent for all users.

    If you want to target a particular user, modify the user-specific SSH configuration file (~/.ssh/config). The instructions below apply to both files.

    Suppose you want to bypass key checking for a particular subnet (192.168.0.0/24).

    Add the following lines to the beginning of the SSH configuration file.
    Host 192.168.0.*
       StrictHostKeyChecking no
       UserKnownHostsFile=/dev/null

    Note that the configuration file should have a line like Host * followed by one or more parameter-value pairs. Host * means that it will match any host. Essentially, the parameters following Host * are the general defaults. Because the first matched value for each SSH parameter is used, you want to add the host-specific or subnet-specific parameters to the beginning of the file.

    As a final word of caution, unless you know what you are doing, it is probably best to bypass key checking on a case by case basis, rather than making blanket permanent changes to the SSH configuration files.


    Refer & Thanks to: http://linuxcommando.blogspot.com/2008/10/how-to-disable-ssh-host-key-checking.html

    Thursday, October 6, 2011

    Setup of VSFTPD virtual users

    If you are hosting several web sites, for security reasons you may want each webmaster to access only their own files. One good way is to give them FTP access by setting up VSFTPD virtual users and directories. This article describes how you can do that easily.
    (See also: Setup of VSFTPD virtual users – another approach)
    1. Installation of VSFTPD
    For Red Hat, CentOS and Fedora, you may install VSFTPD by the command
    # yum install vsftpd
    For Debian and Ubuntu,
    # apt-get install vsftpd
    2. Virtual users and authentication
    We are going to use pam_userdb to authenticate the virtual users. This needs a username / password file in `db' format – a common database format. To create it we need the `db_load' program. For CentOS and Fedora, you may install the package `db4-utils':
    # yum install db4-utils
    For Ubuntu,
    # apt-get install db4.2-util
    To create a `db’ format file, first create a plain text file `virtual-users.txt’ with the usernames and passwords on alternating lines:
    mary
    123456
    jack
    654321
    Then execute the following command to create the actual database:
    # db_load -T -t hash -f virtual-users.txt /etc/vsftpd/virtual-users.db
    Now, create a PAM file /etc/pam.d/vsftpd-virtual which uses your database:
    auth required pam_userdb.so db=/etc/vsftpd/virtual-users
    account required pam_userdb.so db=/etc/vsftpd/virtual-users
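    If you want to double-check what went into the database, the db4 utilities can dump it back out (assuming `db_dump' is installed alongside `db_load'):
    # db_dump -p /etc/vsftpd/virtual-users.db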
    3. Configuration of VSFTPD
    Create a configuration file /etc/vsftpd/vsftpd-virtual.conf,
    # disables anonymous FTP
    anonymous_enable=NO
    # enables non-anonymous FTP
    local_enable=YES
    # activates virtual users
    guest_enable=YES
    # virtual users to use local privs, not anon privs
    virtual_use_local_privs=YES
    # enables uploads and new directories
    write_enable=YES
    # the PAM service used to authenticate the virtual users
    pam_service_name=vsftpd-virtual
    # in conjunction with 'local_root',
    # specifies a home directory for each virtual user
    user_sub_token=$USER
    local_root=/var/www/virtual/$USER
    # the virtual user is restricted to the virtual FTP area
    chroot_local_user=YES
    # hides the FTP server user IDs and just display "ftp" in directory listings
    hide_ids=YES
    # runs vsftpd in standalone mode
    listen=YES
    # listens on this port for incoming FTP connections
    listen_port=60021
    # the minimum port to allocate for PASV style data connections
    pasv_min_port=62222
    # the maximum port to allocate for PASV style data connections
    pasv_max_port=63333
    # controls whether PORT style data connections use port 20 (ftp-data)
    connect_from_port_20=YES
    # the umask for file creation
    local_umask=022
    4. Creation of home directories
    Create each user’s home directory in /var/www/virtual, and change the owner of the directory to the user `ftp’:
    # mkdir /var/www/virtual/mary
    # chown ftp:ftp /var/www/virtual/mary
    5. Startup of VSFTPD and test
    Now we can start VSFTPD by the command:
    # /usr/sbin/vsftpd /etc/vsftpd/vsftpd-virtual.conf
    and test the FTP access of a virtual user:
    # lftp -u mary -p 60021 192.168.1.101
    The virtual user should have full access to his directory.

    Thursday, June 9, 2011

    IPV6 - Chapter 2 - Addressing architecture in IPv6


    Abstract

    This white paper discusses IPv6 addressing and compares it with IPv4. It outlines the three types of IPv6 address – unicast, multicast, and anycast. It also discusses types of unicast addresses and IEEE 802 addressing.

    IPv6 addressing

    Comparing IPv4 and IPv6 addresses

    IPv4 contains a 32-bit address space, which provides for 2^32 – or 4,294,967,296 – addresses. The IPv6 128-bit address space allows for 2^128 – or 340,282,366,920,938,463,463,374,607,431,768,211,456, roughly 3.4 × 10^38 – possible addresses.
    The current allocation of IPv6 addresses is determined according to the value of their high order bits. These values are fixed and also known as a Format Prefix (FP). 

    Table 1: The current allocation of IPv6 address space
    Allocation                                                   FP (binary)   Fraction of the address space
    Reserved                                                     0000 0000     1/256
    Unassigned                                                   0000 0001     1/256
    Reserved for Network Service Access Point (NSAP) allocation  0000 001      1/128
    Reserved for Internet Packet Exchange (IPX) allocation       0000 010      1/128
    Unassigned                                                   0000 011      1/128
    Unassigned                                                   0000 1        1/32
    Unassigned                                                   0001          1/16
    Aggregatable global unicast addresses                        001           1/8
    Unassigned                                                   010           1/8
    Unassigned                                                   011           1/8
    Unassigned                                                   100           1/8
    Unassigned                                                   101           1/8
    Unassigned                                                   110           1/8
    Unassigned                                                   1110          1/16
    Unassigned                                                   1111 0        1/32
    Unassigned                                                   1111 10       1/64
    Unassigned                                                   1111 110      1/128
    Unassigned                                                   1111 1110 0   1/512
    Link-local unicast addresses                                 1111 1110 10  1/1024
    Site-local unicast addresses                                 1111 1110 11  1/1024
    Multicast addresses                                          1111 1111     1/256

    Address representation

    IPv4 addresses use dotted decimal notation, whereby the address is divided into octets. Each octet in an IPv4 address is assigned a decimal value from 0 to 255. IPv6 addresses are represented using the format
    X:X:X:X:X:X:X:X
    where each X represents a 16-bit section of the 128-bit address, written as four hexadecimal digits, and the sections are separated by colons. For example,
    ECBD:00D3:0000:B33D:8785:0000:1734:F51C
    This address represented in binary is:
    11101100101111010000000011010011000000000000000010110011001111011000011110000101000000000000000000010111001101001111010100011100
    The IPv6 address is divided along 16-bit boundaries:
    1110110010111101 0000000011010011 0000000000000000 1011001100111101 1000011110000101 0000000000000000 0001011100110100 1111010100011100
    The first four bits conform to the unassigned prefix value 1110, which represents 1/16 of all IPv6 addresses.
    When a 16-bit section begins with one or more zeros, the leading zeros can be omitted. When an IPv6 address contains a contiguous series of zero sections, a double colon (::) can be used in place of those zeros (once per address). For example, you would use 3450::3 to display the address
    3450:0:0:0:0:0:0:3
    The IPv6 prefix specifies the bits within the address that are assigned fixed values. The prefix can also be the network identifier. IPv6 prefixes for address ranges, routes, and subnet identifiers are expressed in address/prefix-length notation. This uses the structure of classless interdomain routing (CIDR) notation employed by IPv4. For example, a subnet prefix would be expressed as
    ECBD:A2:0:1A3C::/64

    Types of IPv6 address

    IPv4 uses broadcast addressing, whereby every network node must process all broadcast requests. This is an inefficient routing process, as most broadcasts are not relevant to the majority of nodes on the network.
    The three types of addressing employed by IPv6 are
    • unicast
    • multicast
    • anycast

    Unicast addresses

    Unicast addresses are 128-bit fields that identify a single IPv6 interface. They contain information that refers exclusively to the associated interface, and packets sent to a unicast address will be forwarded to the relevant interface.
    Like IPv4 addresses, unicast addresses can be split into two parts:
    • the subnet prefix
    • the interface ID
    The subnet prefix is used to route the packet. The distance of the router from the specified interface address influences the length of the subnet prefix, which in turn can determine the length of the interface ID. The interface ID identifies the network node associated with the target IPv6 interface.

    Multicast addresses

    IPv6 multicast addresses identify a set of interfaces that are usually assigned to different nodes. Packets transmitted to a multicast address are sent to all interfaces linked to that address. Multicast addresses cannot be the source address for a packet – they can only be the destination address.
    [Figure: IPv6 multicast address format, from RFC 2373 – an 8-bit Format Prefix, a 4-bit Flgs field (the upper bits reserved and set to zero), a 4-bit Scope field, and a 112-bit Group ID field.]
    IPv6 multicast addresses consist of four fields. The Format Prefix field is an 8-bit field that identifies the packet's destination as a multicast address. The Flgs field contains 4-bit flags. The fourth or lowest order bit of the Flgs field specifies whether the multicast address is transient or well known – the first three bits have not yet been assigned a function.
    The Scope field specifies the scope of the multicast address group. The scope can range from including nodes on only the local network to nodes at any IPv6 global address. The multicast group is represented by the value in the 112-bit Group ID field.
    Table 2: Values for the Scope field
    Value  Type of scope
    0      Reserved
    1      Node-local scope
    2      Link-local scope
    5      Site-local scope
    8      Organization-local scope
    E      Global scope
    F      Reserved

    Anycast addresses

    Anycast addressing identifies a set of interfaces that are usually assigned to different nodes. Multiple nodes can share anycast addresses, but only one node receives the packets sent to the anycast address. Packets transmitted to an anycast address are delivered to the nearest interface associated with that address. Anycast addresses are assigned to routers, rather than hosts, and they cannot be used as source addresses. An anycast address can identify, for example, the set of routers belonging to an
    • Internet service provider (ISP)
    • routing domain
    • subnet

    Types of unicast addresses

    The types of IPv6 unicast addresses include
    • aggregatable global unicast addresses
    • link-local addresses
    • site-local addresses
    • special IPv6 addresses
    Aggregatable global unicast addresses are intended to provide efficient routing and are similar to public IPv4 addresses. They share the structure of site-local addresses after the first 48 bits.
    After the initial 3-bit Format Prefix (001), the aggregatable global unicast address structure contains the following five fields:
    • the 13-bit Top-Level Aggregation Identifier field
    • the 8-bit Reserved field
    • the 24-bit Next-Level Aggregation Identifier field
    • the 16-bit Site-Level Aggregation Identifier field
    • the 64-bit Interface ID field
    [Figure: Aggregatable global unicast address structure – each of the fields above shown with its bit size.]
    Both site-local and link-local addresses are types of local-use unicast address. Nodes use link-local addresses to communicate with neighbor nodes on the same network link. They are also used for Neighbor Discovery protocol transmissions. Site-local addresses are used to transmit messages to nodes within the same site. Such addresses are not accessible to nodes on external sites.

    Special IPv6 addresses

    The two types of special IPv6 addresses are
    • unspecified address
    • loopback address
    The unspecified address does not identify an interface or target address. It can be used as a source to confirm the identity of an undefined address and to mark the absence of an IPv6 address. Loopback addresses identify a loopback interface, whereby a node can use a loopback address to send a message to itself.

    Compatibility addresses

    The compatibility addresses are designed to assist with the transition from IPv4 to IPv6. This form of address can support both host types and contains the following addresses:
    • IPv4-compatible address
    • IPv4-mapped address
    • 6to4 address

    IPv4-compatible addresses

    IPv6/IPv4 nodes that use IPv6 for communication use IPv4-compatible addresses. IPv4-compatible addresses can be the destination address for IPv6 messages. For IPv6 messages to be forwarded to this destination, they are encapsulated within IPv4 headers.

    IPv4-mapped addresses

    IPv4-mapped addresses represent an IPv4-only node (one that can be used only on the IPv4 infrastructure) to an IPv6 node. This type of address cannot be the source or destination address of an IPv6 packet.

    6to4 addresses

    The 6to4 address is a tunneling technique that enables two nodes that support both IPv4 and IPv6 to communicate.

    IEEE 802 addresses

    Institute of Electrical and Electronics Engineers (IEEE) 802 addresses are 48-bit addresses that identify network adapters. They consist of two parts:
    • the 24-bit company ID
    • the 24-bit extension ID, or board ID
    [Figure: 48-bit IEEE 802 address structure – a 24-bit IEEE-administered company ID field followed by a 24-bit manufacturer-selected extension ID field.]
    The company ID identifies the manufacturer of the network adapter, and the extension ID is the unique global identifier of the network adapter.
    IEEE 802 has two defined bits:
    • Universal/Local (UL)
    • Individual/Group (I/G)
    The UL bit in the first byte specifies whether the IEEE 802 address is administered locally or universally, and the I/G bit in the first byte indicates whether the address is unicast (local) or multicast (group).
    The IEEE 802 address is also known as the
    • hardware address
    • media access control (MAC) address
    • physical address

    IEEE EUI-64 addresses

    The IEEE EUI-64 address provides a larger addressing space than the IEEE 802 address by increasing the extension ID to 40 bits. IEEE 802 addresses can be mapped to EUI-64 addresses by inserting the 16 bits 0xFFFE – or 1111 1111 1111 1110 – between the company ID and the extension ID.
    [Figure: Converting a 48-bit IEEE 802 address to an EUI-64 address – the address grows to 64 bits with the insertion of the two 8-bit values 0xFF and 0xFE.]
    EUI-64 addresses can be mapped to an interface identifier for IPv6 unicast addresses by inverting the U/L bit in the EUI-64 address (a 1 becomes 0, and a 0 becomes 1). To map an IEEE 802 address to an IPv6 interface identifier, the IEEE 802 address must first be converted to EUI-64.
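    As an illustration of the mapping, here is a minimal bash sketch (the function name is our own; it inserts fffe and inverts the U/L bit):

    mac_to_eui64() {
        local m=${1//:/}                      # strip colons: 001b213c4d5e
        local b0=$(( 16#${m:0:2} ^ 0x02 ))    # invert the U/L bit in the first octet
        printf '%02x%s:%sff:fe%s:%s\n' \
            "$b0" "${m:2:2}" "${m:4:2}" "${m:6:2}" "${m:8:4}"
    }
    mac_to_eui64 00:1b:21:3c:4d:5e            # prints 021b:21ff:fe3c:4d5e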

    Summary

    IPv6 addresses are 128-bits long, and they are assigned to interfaces and sets of interfaces. Unicast addresses identify single interfaces, and they are divided into the subnet prefix and the interface ID. The subnet prefix is used to specify routing, and the interface ID identifies the target interface. Multicast addresses identify a set of interfaces that are usually assigned to different nodes. Anycast addresses also identify a set of interfaces assigned to different nodes, but a packet with an anycast address is routed to the nearest interface having that address. Types of unicast address include aggregatable global unicast addresses, special addresses, and compatibility addresses.

    --
    kiranツith

    IPV6 - Chapter 1 - Introduction

    IPv6
    IPv6 provides a total of about 3.403×10^38 addresses.

    IPV6 is a new version of the Internet Protocol, designed as the successor to IPv4.
    The changes from IPV4 to IPV6 are predominantly in the following areas:
    1. Addressing
    2. Header Format
    3. Flow
    4. Extensions and Options
    5. Authentication and Privacy

    1. The most significant change in the upgrade from IPV4 to IPV6 is the increase in address space from 32 bits to 128 bits. This new addressing capability can cope with the accelerating usage of the internet. IPV6 also changes the addressing types, introducing anycast addressing and discarding the broadcast address employed by IPV4

    2. IPV4 headers contain at least 12 fields, which can vary in length from 20 to 60 bytes.
    IPV6 has simplified the header structure by using a fixed length of 40 bytes. The reduction in the number of fields that need to be processed allows for more effective network routing. IPV6 changes the packet fragmentation principle by allowing fragmentation to be conducted by the source node only. This also reduces the number of fields required in the packet header. The packet header format is further simplified in IPV6 by the removal of the checksum field. IPV6 focuses on routing packets, and checksums are implemented in higher-level protocols, such as UDP and TCP

    3. IPV4 processes each packet individually at intermediate routers. These routers do not record packet details for future handling of similar packets. IPV6 introduces the concept of packets in a flow. A flow is a series of packets in a stream of data that require special handling; an example of a flow is a stream of real-time video data. IPV6 routers can monitor flows and log consistent information for the effective handling of flow packets.

    4. IPV4 adds options to the end of the IP header, whereas IPV6 places options in separate extension headers. This means that, in IPV6, the option header is processed only when a packet contains options. The use of extension headers to contain options obviates the need for all routers to examine certain options. For example, in IPV6, only the source node can fragment a packet, therefore the only nodes that need to examine the fragmentation extension header are the source and destination nodes.

    5. The two security extensions employed by IPV6 are
    • authentication header
    Packet authentication is implemented through message-digest functions. The sender calculates a message digest, or hash, on the packet being sent. The result of this calculation is contained in the authentication header. The packet recipient performs a hash on the received packet and compares the result against the value in the authentication header. Matching values confirm that the packet traveled from source to destination without violation. Differing values indicate that the packet was modified in transit.

    • encapsulating security payload (ESP) header
    The ESP header can encrypt the payload field in an IPV6 packet or the entire packet, ensuring data integrity as it is forwarded across the network. Encrypting the entire packet ensures that packet data, such as the source and destination addresses, is not intercepted during transmission. Fully encrypted packets are transported within another IPV6 packet that is addressed to a security gateway.

    Header Structure of IPV4 & IPV6

    IPV4
    The IPV4 packet header is aligned on a 32-bit, or 4-byte, boundary.
    It contains
    • Ten fields
    Contains: Version, Header Length, Type of Service, Total length, Identifier, Flags, Fragment Offset, Time to Live, Protocol, Header Check-sum
    • Two addresses
    Source Address and Destination Address
    • Options
    Options + Padding

    IPV6
    The IPV6 packet header expands on the IPV4 header by providing a 64-bit, or 8-byte, boundary. All IPV6 headers are 40 bytes in total. It contains a simpler header format of
    • Six Fields
    Version, Traffic Class, Flow Label, Payload Length, Next Header & Hop Limit
    • Two Addresses
    Source Address and Destination Address

    Extension Headers
    IPV4 implements a complex method for the inclusion of options in the routing of packets. The IPV4 packet structure can vary in size from 20 to 60 bytes, and IPV4 options are included as extra data. As a result, options may be forwarded without being processed, or be processed at each router. Such inefficient routing can lead developers to avoid the use of options.
    IPV6 implements a new variety of extension headers to improve the routing of packets with options. Instead of incorporating options into the IPV6 header, the options are placed in separate extension headers appended to the IPV6 header and identified by the Next Header field.
    Extension headers - with the exception of the hop-by-hop options header - are not processed until they reach the destination address. Each extension header is a multiple of 8 octets in length, preserving the 64-bit alignment for subsequent headers.


    --
    //kiranツith

    Tuesday, February 8, 2011

    Linux Process - Tips


    What is a Process ?
    When a program is read from disk into memory and its execution begins, the currently executing image is called a process

    PID
    The process ID is a number between 1 and 32767 by default (this is customizable). You can set the value higher (up to 2^22, or 4,194,304, on 64-bit machines). To raise the limit, as root run the following:
    # echo 4194303 > /proc/sys/kernel/pid_max
    Alternatively, you can add a line to your /etc/sysctl.conf instead of the above command. This is the more natural solution for such systems, but you'll need to reboot the system or use the sysctl program for the change to take effect. Append the following to your /etc/sysctl.conf:
    #Allow for more PIDs (to reduce rollover problems); may break some programs
    kernel.pid_max = 4194303
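    To apply the new value from /etc/sysctl.conf without a reboot, run:
    # sysctl -p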
    PPID
    Each process in Linux has a parent. When the system starts, a single process called init is created, whose PID is 1. The init process then begins to bring the system up, creating processes as needed. These newly created processes may start other processes, but the ultimate parent is always init.

    PS command

    "ps" command with no option shows: 
    • Process ID (PID)
    • The terminal (TTY)
    • The amount of CPU time that the process has accumulated (TIME)
    • The command used (CMD)

    "ps -f" give more info(full option). It displays the below options in addition to above
    • The Parent PID (PPID)
    • The process start time (STIME)
    • The user ID (UID)
    Processes other than your own can also be checked with the "ps" command using the -e option, which displays all the processes in the system.
    Also:
    • -u limits the display to the given users
    • -g limits the display to the given groups
    • -p limits the display to the given PIDs
    • -t limits the display to the given terminals
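    For example (a couple of illustrative invocations):
    ps -e -f            # full-format listing of every process on the system
    ps -f -u root       # full-format listing of root's processes
    ps -p 1             # display PID 1 (init) only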

    Managing process In Linux
    Two methods are commonly used to manage processes:
    1. Using the signaling system (sending signals to processes using the kill, skill and pkill commands)
         Signals are software interrupts used to communicate status and information amongst processes. The TERM signal can be caught or ignored. The KILL signal (9) cannot be caught or ignored, and causes immediate termination of the process. Ctrl-C sends the INT (2, interrupt) signal to the process - this is why the process terminates. Ctrl-\ sends the QUIT (3) signal to the running process. The TERM (terminate) signal (15) is the default signal sent to a process by the kill command.
    The HUP signal (1) is generated by a modem hangup. It often tells a daemon to reconfigure (restart) itself.
    kill -1 on your shell's PID kills the shell and logs you out.


    2. Using the /proc interface
         Much of the process information is available to a user through a special interface known as the /proc file system. Every process running on a Linux system has a corresponding directory in /proc, named with the PID of the process. The /proc file system does not exist on disk: it is an interface to the running system and the current kernel, and it presents kernel and process information in an easy-to-access manner.

    Managing process using /proc:
    There is a wealth of information about a running process in its /proc entry. Most of this information is meant for use by programs like ps, so we need to do some pre-processing before we can view it. You can use the "tr" command for this: by translating ASCII NUL characters to LF (line feed) characters, we get a meaningful display.
    Eg:-
    # tr '\0' '\n' < /proc/1223/environ
    This example shows the environment of process 1223.

    "environ" is the file which contains the environment details of the process.
    "cwd" folder shows the current working directory
    "fd" contains the links to every file that a process may have opened. This directory called fd (File  Descriptor). File Descriptor is a number used by a program to identify an open file. Each process in /proc file system will have a "fd". This is a vital information for a system administrator trying to  manage a large and complex system. For instance, a file system may not be unmounted if any process  has a file opened in that file system. By checking /proc, and administrator can determine and  resolve the problem

    Eg:
    # umount /home
    umount: /home: device is busy
    # ls -al /proc/*/fd | grep home
    This will show which processes have files open under /home.
    Killing a Job
    # kill -9 %1
    The % must be added before the job number; it makes the shell replace the job number with the process ID.

    Log Files, Errors and Status

    Syslog facilities:
    authpriv
    cron
    daemon
    kern
    lpr
    mail
    news
    uucp
    user
    local0 - local7


    Syslog Priority:
    emerg -  Emergency condition, such as an imminent system crash, usually broadcast to all users
    alert -    Condition that should be corrected immediately, such as a corrupted system database
    crit -     Critical condition, such as a hardware error
    err -      Ordinary error
    warning - Warning
    notice -  Condition that is not an error, but possibly should be handled in a special way
    info -    Informational message
    debug -  Messages that are used when debugging programs
    none -   Do not send messages from the indicated facility to the selected file. For example, specifying
    *.debug;mail.none sends all messages except mail messages to the selected file.

    Note:-
    Logrotate keeps 4 weeks of logs before the oldest log is rotated out or deleted. Syslog entries all share a common format: the entry starts with the date and time, followed by the name of the system which logged the message.

    CORE Error handling:
    When unexpected errors occur, the system may create a core file. A core file contains a copy of the memory image of the process at the time the error occurred. It is named "core" because main system memory was originally called core memory, as it was made up of ferrite donuts that were wired together through their holes, or cores.

    A core file can be used to autopsy a dead process. Even if you are not a programmer, and do not have access to core analysis tools, core files can still be used to find information that may help you identify the cause of the program's death.

    The first thing to do with a "core" file is to use the "file" command to determine what program caused the core and what (if any) signal initiated the dumping of core. Core files are normally called core or core.xxxx, where "xxxx" is the PID of the process before it died. Using "man 7 signal" will bring up a list of signals. By this means we can determine the issue, and the author can also be notified if there is any kind of bug (for example, if an invalid memory reference error occurs).

    strings Command:
         The strings program displays printable strings from a binary file. Using strings on a core file, you can display all of the strings included in the core image. At the end of the core file is the process environment, which includes the command used to start the program.
    This information can give vital clues to the cause of death. Looking through the core file for pathnames can also give information about the configuration files and shared libraries required to run the program.

    Customizing the Shell
         In Bash, there are 4 prompt strings used, and all of them can be customized. These strings are represented by the environment variables PS1, PS2, PS3 and PS4. The normal command prompt, which is displayed to indicate the shell is ready for a new command, is found in the PS1 variable. Should a command require more than a single line of input, the secondary prompt string PS2 is displayed. This can be seen when typing flow-control statements interactively.
    The select statement uses PS3 to display the prompt for the generated menu. The default is "#?".
    Finally, the PS4 prompt is used when debugging shell scripts. The shell allows an execution trace, showing each command as it is executed. This is enabled by using the -x option to the shell, or by using "set -x" at the start of the script.


    PS3 and PS4 can be set to any text. The text is displayed with no change; there is no way to place variable text within these strings. However, PS1 and PS2 can contain text that is evaluated each time the prompt is displayed. This can be done with the $(command) syntax, or with a special set of characters used specifically for the purpose.
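    For instance, a small sketch of both styles (the prompt contents are arbitrary):
    # backslash escapes: user, host and working directory, re-evaluated at each prompt
    PS1='[\u@\h \W]\$ '
    # command substitution, kept in single quotes so it runs each time the prompt is drawn
    PS1='$(date +%H:%M) \w \$ '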

    Some notes about Linux File System
         The structure of a file system determines its use and the manner in which commands and utilities interact with it. This is especially true of management commands that change or affect the file system. Because of this, we need to explore the structure of a Linux file system before we can look at the file system management commands.

    All Linux filesystems have a similar logical structure as far as the user or system commands are concerned. This is achieved by the file system driver logic in the Linux kernel, regardless of the underlying data layout. A file system usually consists of a master information table called the superblock, a list of file summary information blocks called inodes, and the data blocks associated with your data.

    Every filesystem has its own root directory, which is always identified by inode number 2. This is the  first usable inode in a Linux filesystem. This directory is special, in that it can be used to attach the  filesystem to the main, or root filesystem. The directory on the root or parent filesystem at which the  new filesystem is attached is called the mount point.

    /dev files:
         A /dev entry looks like any other file except that it does not have a size. Instead it has a major and minor device number, and a block or character designation. The major number identifies which device driver is being used. There are two kinds of device drivers, block and character, each with their own set of major numbers.

    The minor number identifies the sub-device or operation for the device driver. For example, a tape drive may have different minor numbers for operation in compressed and uncompressed mode. There are a number of general-purpose devices as well. The /dev/null file is also known as the "bit bucket" because it will take anything that is written to it and discard it. It is often used to discard unwanted error messages or to test commands. A similar file is /dev/zero, which does the same for writes, but when read returns as many NUL (hex 00) characters as you ask for. This is often used to create zero-filled files for testing or for database initialization.
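    For example, to create a 10 MB zero-filled test file (the path and size are arbitrary):
    # dd if=/dev/zero of=/tmp/zerofile bs=1M count=10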

    lost+found Directory:
         Every file system requires a directory called lost+found in the root directory of the filesystem. This is used by the system when checking and rebuilding a corrupted file system. Files that have  inodes, but no directory entry, are moved to the lost+found directory. If there are files in this  directory, they will be named with the inode number, as an indication that the file system has suffered  some damage.

    Saturday, January 15, 2011

    Install Packages Via yum Command Using DVD / CD as Repo - CentOS (RHEL Based)



    CentOS Linux comes with CentOS-Media.repo, which covers the default mount locations for a CD-ROM / DVD on CentOS-5.*. You can use this repo and yum to install items directly off the release DVD ISO.
    Open /etc/yum.repos.d/CentOS-Media.repo file, enter:
    # vi /etc/yum.repos.d/CentOS-Media.repo
    Make sure enabled is set to 1:
    enabled=1
    Save and close the file. To use the repo, insert your DVD and, along with the other repos, enter:
    # yum --enablerepo=c5-media install package-name
    To use only the DVD media repo, do this:
    # yum --disablerepo=\* --enablerepo=c5-media install package-name
    Or use the groupinstall command:
    # yum --disablerepo=\* --enablerepo=c5-media groupinstall 'Virtualization'

    Monday, December 28, 2009

    What happens when you browse to a web site

    This is a perennial favorite in technical interviews: "so you type 'www.example.com' in your favorite web browser. In as much detail as you can, tell me what happens."
    Let's assume that we do this on a Linux (or other AT&T System V UNIX system). Here's what happens, in enough detail to make your eyes bleed.
    1. Your web browser invokes the gethostbyname() function to turn the hostname you entered into an IP address. (We'll get back to how this happens, exactly, in a moment.)
    2. Your web browser consults the /etc/services file to determine what well-known port HTTP resides on, and finds 80.
    3. Two more pieces of information are determined by Linux so that your browser can initiate a connection: your local IP address, and an ephemeral port number. Combined with the destination (server) IP address and port number, these four pieces of information represent what is called an Internet PCB, or protocol control block. Furthermore, the IANA defines the port range 49,152 through 65,535 for use in this capacity. Exactly how a port number is chosen from this range depends upon the Linux kernel version. The most common allocation algorithm is to simply remember the last-allocated number, and increment it by one each time a new PCB is requested. When 65,535 is reached, the algorithm loops around to 49,152. (This has certain negative security implications, and is addressed in more detail in Port Randomization by Larsen, Ericsson, et al, 2007.) Also see TCP/IP Illustrated, Volume 2: The Implementation by Wright and Stevens, 1995.
    4. Your web browser sends an HTTP GET request to the remote server. Be careful here, as you must remember that your web browser does not speak TCP, nor does it speak IP. It only speaks HTTP. It doesn't care about the transport protocol that gets its HTTP GET request to the server, nor how the server gets its answer back to it.
    5. The HTTP request passes down the four-layer model that TCP/IP uses, from the application layer, where your browser resides, to the transport layer. The application layer is connectionless, with addressing based upon URLs, or uniform resource locators.
    6. The transport layer encapsulates the HTTP request inside TCP (the transmission control protocol; transport layer for transmission control, makes sense, right?). The TCP packet is then passed down to the second layer, the network layer. The transport layer is connection-based, or persistent, with addressing based upon port numbers: TCP does not care about IP addresses, only that some specific port on the client side is bound to a specific port on the server side.
    7. The network layer uses IP (the Internet protocol), and adds an IP header to the TCP packet. The packet is then passed down to the first layer, the link layer. The network layer is connectionless, or best-effort, with addressing based upon 32-bit IP addresses. Routing, but not switching, occurs at this layer.
    8. The link layer uses the Ethernet protocol. This is a connectionless layer, with addressing based upon 48-bit Ethernet addresses. Switching occurs at this layer.
    9. The kernel must determine the route over which to send the packet. This happens by taking the IP address and consulting the routing table (seen by running netstat -rn.) First, the kernel attempts to match the destination by host address. (For example, if you have a specific route to just the one host you're trying to reach in your browser.) If this fails, then network address matching is tried. (For example, if you have a specific route to the network in which the host you're trying to reach resides.) Lastly, the kernel searches for a default route entry. This is the most common case.
    10. Now that the kernel knows the next hop, that is, the node that the packet should be handed off to, the kernel must make a physical connection to it. Routing depends upon each node in the chain having a literal electrical connection to the next node; it doesn't matter how many nodes (or hops) the packet must pass through so long as each and every one can "see" its neighbor. This is handled on the link layer, which if you'll recall uses a different addressing scheme than IP addresses. This is where ARP, or the address resolution protocol, comes into play. Let's say your machine is 1.2.3.4, and the default gateway is 1.2.3.5. The kernel will send an ARP broadcast which says, "Who has 1.2.3.5? Tell 1.2.3.4." The default gateway machine will see the ARP request and reply, saying "Hey 1.2.3.4, 1.2.3.5 is 8:0:20:4:3f:2a." The kernel places the answer in the ARP cache, which can be viewed by running arp -a. Now that this information is known, the kernel adds an Ethernet header to the packet, and places it on the wire.
    11. The default gateway receives the packet. First, it checks to see if the Ethernet address matches its own. If it does not, the packet is silently discarded (unless the interface is in promiscuous mode.) Next, it checks to see if the destination IP address matches any of its configured interfaces. In our scenario here, it does not: remember that the packet is being routed to another destination by way of this gateway. So the gateway now checks to see if it is configured to permit IP forwarding. If it is not, the packet is silently discarded. We'll assume the gateway is configured to forward IP, so now it must determine what to do with the packet. It consults its routing table, and attempts to match the destination in the same way our web browser system did a moment ago: exact host match first, then network, then default gateway. Yes, a default gateway server can itself have a default gateway. It also uses ARP in the same way as we saw a moment ago in order to reach the next hop, and pass the packet on to it. Before doing so, however, it decrements the TTL (time-to-live) field in the packet, and if it becomes 1 or 0, discards the packet and sends an ICMP TTL expired in transit message back to the sender. Each hop along the way does the same thing. Also, if the packet came in on the same interface that the gateway's routing table says the packet should go out over to reach the next hop, an ICMP redirect message is sent to the sender, instructing it to bypass this gateway and directly contact the next hop on all subsequent packets. You'll know if this happened because a new route will appear in your web browser machine's routing table.
    12. Each hop passes the packet along, until at the destination the last router notices that it has a direct route to the destination, that is, a routing table entry is matched that is not another router. The packet is then delivered to the destination server.
    13. The destination server notices that at long last the IP address in the packet is its own, that is, it resolves via ARP to the Ethernet address of the server itself. Since it's not a forwarding case, and since the IP address matches, it now examines the TCP portion of the packet to determine the destination port. It also looks at the TCP header flags, and since this is the first packet, observes that only the SYN (synchronize) flag is set. Thus, this first packet is one of three in the TCP handshake process. If the port the packet is addressed to (in our case, port 80) is not bound by a process (for example, if Apache crashed) then an ICMP port unreachable message is sent to the sender and the packet is discarded. If the port is valid, and we'll assume it is, a TCP reply is sent, with both the SYN and ACK (acknowledge) flags set.
    14. The packet passes back through the various routers, and unless source routing is specified, the path back may differ from the path used to first reach the server. The client (the machine running your web browser) receives the packet, notices that it has the SYN and ACK flags set, and contains IP and port information that matches a known PCB (protocol control block). It replies with a TCP packet that has only the ACK flag set.
    15. This packet reaches the server, and the server moves the connection from the half-open SYN_RECEIVED state to ESTABLISHED. Using the mechanisms of TCP, the server now guarantees data delivery between itself and the client until such time as the connection times out, or is closed by either side. This differs sharply from UDP, where there is no handshake process and packet delivery is not guaranteed: delivery is only best-effort, and it is left up to the application to figure out whether the packets got there or not.
    16. Now that we have a live TCP connection, the HTTP request that started all of this may be sent over the connection to the server for processing. Depending on whether or not the HTTP server (and client) support persistent connections, the reply may consist of only a single object (usually the HTML page), after which the connection is closed. If persistence (HTTP keep-alive) is enabled, then the connection is left open for subsequent HTTP requests (for example, all of the page elements, such as images, style sheets, etc.)
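    If you want to see this last step by hand, you can speak HTTP yourself; the sketch below assumes nc (netcat) and curl are installed, and www.example.com is a placeholder:

    # Issue a raw HTTP request over the freshly established TCP connection
    printf 'GET / HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n' | nc www.example.com 80

    # Or let curl show the request and response headers for you
    curl -v http://www.example.com/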
    Okay, as I mentioned earlier, we will now address how the client resolves the hostname into an IP address using DNS. All of the above ARP and IP information holds true for the DNS query and replies.
    1. The gethostbyname() function must first determine how it should go about turning a hostname into an IP address. To accomplish this, it consults the /etc/nsswitch.conf file, and looks for a line beginning with hosts:. It then examines the keywords listed, and tries each of them in the order given. For the purposes of this example, we'll assume the pattern to be files dns nis.
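    A quick way to check the order on your own machine (the output shown is just one common example):

    grep '^hosts:' /etc/nsswitch.conf
    # hosts: files dns nis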
    2. The keyword files instructs the resolver library to consult the /etc/hosts file. (Note that this lookup logic lives in the C library, not the kernel.) Since the web server we're trying to reach doesn't have an entry there, the match attempt fails. The resolver checks to see if another resolution method exists, and if it does, it tries it.
    3. The next method is dns, so the resolver now consults the /etc/resolv.conf file to determine what DNS server, or name resolver, it should contact.
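    A typical /etc/resolv.conf looks something like this (the addresses are placeholders):

    cat /etc/resolv.conf
    # nameserver 192.0.2.53
    # nameserver 192.0.2.54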
    4. A UDP request is sent to the first-listed name server, addressed to port 53.
    5. The DNS server receives the request. It examines it to determine if it is authoritative for the requested domain; that is, does it directly serve answers for the domain? If not, then it checks to see if recursion is permitted for the client that sent the request.
    6. If recursion is permitted, then the DNS server consults its hints file (often called named.ca) for the appropriate root DNS server to talk to. It then sends a DNS request to the root server, asking it for the authoritative server for this domain. Strictly speaking, the root server first refers it to the servers for the top-level domain (such as .com), which in turn supply the name of a third DNS server: the authoritative DNS server for the domain. This is the server that is listed when you perform a whois on the domain name.
    7. The DNS server now contacts the authoritative DNS server, and asks for the IP address of the given hostname. The answer is cached, and the answer is returned to the client.
    8. If recursion is not supported, then the DNS server simply replies with go away or go talk to a root server. The client is then responsible for carrying on, as follows.
    9. The client receives the negative response, and sends the same DNS request to a root DNS server.
    10. The root DNS server receives the query, and since it is not configured to support recursion, but is a root server, it responds with "go ask so-and-so, that's the authoritative server for that domain." Note that this is not the final answer, but is a definite improvement over a simple go away.
    11. The client now knows who to ask, and sends the original DNS query for a third time to the authoritative server. It replies with the IP address, and the lookup process is complete.
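    You can replay this whole conversation yourself with dig (from the dnsutils or bind-utils package); the hostname below is a placeholder, and 198.41.0.4 is a.root-servers.net:

    # Ask your configured resolver and let it recurse on your behalf
    dig www.example.com A

    # Perform the iteration yourself, starting at the root servers
    dig +trace www.example.com A

    # Ask a root server directly with recursion disabled, to see a bare referral
    dig +norecurse @198.41.0.4 www.example.com A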
    A few notes on things that I didn't want to clutter up the above narrative with:
    • When a network interface is first brought online, be it during boot or manually by the administrator, something called a gratuitous ARP request is broadcast. It literally asks, "who has 1.2.3.4? Tell 1.2.3.4." This looks redundant at first glance, but it actually serves a dual purpose: it allows neighboring machines to cache the new IP to Ethernet address mapping, and if another machine already has that IP address, it will reply with a typical ARP response of "Hey 1.2.3.4, 1.2.3.4 is 8:0:20:4:3f:2a." The first machine will then log an error message to the console saying "IP address 1.2.3.4 is already in use by 8:0:20:4:3f:2a." This is done to communicate to you that your Excel spreadsheet of IP addresses is wrong and should be replaced with something a bit more accurate and reliable.
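    If arping (from the iputils package) is available, you can generate these probes by hand; the interface and address below are placeholders:

    # Duplicate address detection: is anyone else already claiming 1.2.3.4?
    arping -D -c 3 -I eth0 1.2.3.4

    # Announce our own address with an unsolicited (gratuitous) ARP
    arping -U -c 3 -I eth0 1.2.3.4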
    • The Ethernet layer contains a lot more complexities than I detailed above. In particular, because only one machine can be talking over the wire at a time (literally due to electrical limitations) there are various mechanisms in place to prevent collisions. The most widely used is called CSMA/CD, or Carrier Sense Multiple Access with Collision Detection, where each network card is responsible for transmitting only when a clear carrier signal is present on the wire. Also, should two cards start transmitting at the exact same instant, all cards are responsible for detecting the collision and reporting it to the responsible cards. They then must stop transmitting and wait a random time interval before trying again. This is the main reason for network segmentation: the more hosts you have on a single wire, the more collisions you'll get; and the more collisions you get, the slower the overall network becomes.
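    Collisions show up in the interface statistics, so a quick way to gauge whether a segment is overloaded is (eth0 is an assumed interface name):

    # Look for the collisions counter in the interface statistics
    ifconfig eth0
    ip -s link show eth0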

    Tuesday, December 15, 2009

    HOWTO : Make sure no rootkit on your Ubuntu server


    To make sure rootkits, trojans, or worms have not been installed on your server without your approval, you should check for them frequently.

    ChkRootKit

    Get the chkrootkit package :

    sudo apt-get install chkrootkit

    Make a Cron Job to do the scan daily at 0700 hours :

    sudo crontab -e



    0 7 * * * /usr/sbin/chkrootkit -q 2>&1 | mail -s "Daily ChkRootKit Scan" me@mail.com

    Do a manual scan :

    sudo /usr/sbin/chkrootkit


    Rootkit Hunter (Optional)

    sudo apt-get install rkhunter

    Make a Cron Job to do the scan daily at 0500 hours :

    sudo crontab -e



    0 5 * * * rkhunter --cronjob --rwo | mail -s "Daily Rootkit Hunter Scan" me@mail.com

    Do a manual scan :

    sudo rkhunter --check


    Forensic tool to find hidden processes and ports – unhide

    Get the unhide package :

    sudo apt-get install unhide

    Make a Cron Job to do the scan daily between 0800 and 0930 hours :

    sudo crontab -e

    0 8 * * * unhide proc -q 2>&1 | mail -s "Daily unhide proc Scan" me@mail.com

    30 8 * * * unhide sys -q 2>&1 | mail -s "Daily unhide sys Scan" me@mail.com

    0 9 * * * unhide brute -q 2>&1 | mail -s "Daily unhide brute Scan" me@mail.com

    30 9 * * * unhide-tcp -q 2>&1 | mail -s "Daily unhide-tcp Scan" me@mail.com

    Do a manual scan :

    sudo unhide proc
    sudo unhide sys
    sudo unhide brute
    sudo unhide-tcp

    Beware :
    Rootkit Hunter and ChkRootKit may produce some false positives when your packages or files have been updated, or when they behave similarly to a rootkit.

    Remarks :
    These checks do not prove with 100% certainty that your system is safe from rootkit attacks.

    Tuesday, December 1, 2009

    Troubleshooting Linux networking: module and driver problems


    Network-related problems on your Linux machine can be hard to resolve because they go beyond the trusted environment of your Linux box. But, as a Linux administrator, you can help your network administrator by applying the right technologies. In this article you'll learn how to troubleshoot network related driver problems.
    It's easy to determine that a problem you're encountering is a network problem -- if your computer can't communicate with other computers, something is wrong on the network. But, it may be harder to find the source of the problem. You need to begin by analyzing the chain of elements involved in network communication.
    If your host needs to communicate with another host in the network, the following conditions need to be met:
    1. The network card is installed and available in the operating system, i.e., the correct driver is loaded.
    2. The network card has an IP address assigned to it.
    3. The computer can communicate with other hosts in the same network.
    4. The computer can communicate with other hosts in other networks.
    5. The computer can communicate with other hosts using their host names.
    Troubleshooting network driver issues
    To communicate with other computers on the network, your computer needs a network interface. The method your computer uses to obtain such a network interface is well designed. During the system boot, the kernel probes the buses that are available and, typically on the PCI bus, finds a network card. Next, it determines which driver is needed to address the network card, and if that driver is available, loads it. Following that, the udev daemon (udevd), which is started in the initial boot phase of your computer, creates the network device for you. In a simple computer with only one network interface, this will typically be the eth0 device, but as you will read later, other interfaces can also be used. Once the interface has been created, the next stage can begin, in which the network card gets an IP address.
    As was just discussed, several steps are involved in loading the driver for the network card correctly; each can be verified with the commands sketched after this list.
    1. The kernel probes the PCI bus.
    2. Based on the information it finds on the PCI bus, a driver is loaded.
    3. Udev creates the network interface which you need to actually use the network interface.
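    Here is that verification sketch (the pcnet32 module name matches the example that follows; your driver will differ):

    # Did udev create the interface?
    ip link show

    # Is the expected driver module loaded?
    lsmod | grep pcnet32

    # Did the kernel log the driver claiming the card at boot?
    dmesg | grep -i eth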
    To fix network card problems, begin by determining if the network card was really found on the PCI-bus. To do that, use the lspci command. Here is an example output of lspci:
    JBO:~ # lspci
    00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (rev 01)
    00:01.0 PCI bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge (rev 01)
    00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 08)
    00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
    00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)
    00:07.7 System peripheral: VMware Inc Virtual Machine Communication Interface (rev 10)
    00:0f.0 VGA compatible controller: VMware Inc Abstract SVGA II Adapter
    00:10.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)
    02:00.0 USB Controller: Intel Corporation 82371AB/EB/MB PIIX4 USB
    02:01.0 Ethernet controller: Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE] (rev 10)
    02:02.0 Multimedia audio controller: Ensoniq ES1371 [AudioPCI-97] (rev 02)
    02:03.0 USB Controller: VMware Inc Abstract USB2 EHCI Controller
    JBO:~ #
    Here, at PCI address 02:01.0, an Ethernet network card is found. The network card is an AMD 79c970, and the name between square brackets (PCnet32 LANCE) tells us that the pcnet32 kernel module is needed to address this network card.
    The next step is to check the hardware configuration as reflected in the /sys tree. Every PCI device has its configuration stored in there, and for the network card in this example, it is stored in the directory /sys/bus/pci/devices/0000:02:01.0, which reflects the address of the device on the PCI bus. Here is an example of the contents of this directory:
    JBO:/sys/bus/pci/devices/0000:02:01.0 # ls -l
    total 0
    -rw-r--r-- 1 root root 4096 Oct 18 07:08 broken_parity_status
    -r--r--r-- 1 root root 4096 Oct 17 07:50 class
    -rw-r--r-- 1 root root 256 Oct 17 07:50 config
    -r--r--r-- 1 root root 4096 Oct 17 07:50 device
    lrwxrwxrwx 1 root root 0 Oct 17 07:51 driver -> ../../../../bus/pci/drivers/pcnet32
    -rw------- 1 root root 4096 Oct 18 07:08 enable
    lrwxrwxrwx 1 root root 0 Oct 18 07:08 firmware_node -> ../../../LNXSYSTM:00/device:00/PNP0A03:00/device:06/device:08
    -r--r--r-- 1 root root 4096 Oct 17 07:50 irq
    -r--r--r-- 1 root root 4096 Oct 18 07:08 local_cpulist
    -r--r--r-- 1 root root 4096 Oct 18 07:08 local_cpus
    -r--r--r-- 1 root root 4096 Oct 17 07:53 modalias
    -rw-r--r-- 1 root root 4096 Oct 18 07:08 msi_bus
    drwxr-xr-x 3 root root 0 Oct 17 07:50 net
    -r--r--r-- 1 root root 4096 Oct 18 07:08 numa_node
    drwxr-xr-x 2 root root 0 Oct 18 07:08 power
    -r--r--r-- 1 root root 4096 Oct 17 07:50 resource
    -rw------- 1 root root 128 Oct 18 07:08 resource0
    -r-------- 1 root root 65536 Oct 18 07:08 rom
    lrwxrwxrwx 1 root root 0 Oct 17 07:50 subsystem -> ../../../../bus/pci
    -r--r--r-- 1 root root 4096 Oct 17 07:51 subsystem_device
    -r--r--r-- 1 root root 4096 Oct 17 07:51 subsystem_vendor
    -rw-r--r-- 1 root root 4096 Oct 17 07:51 uevent
    -r--r--r-- 1 root root 4096 Oct 17 07:50 vendor
    JBO:/sys/bus/pci/devices/0000:02:01.0 #
    The most interesting item for troubleshooting is the symbolic link to the driver directory. In this example it points to the pcnet32 driver and using the information that lspci provided, we know this is the correct driver.
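    Two quick ways to confirm which driver is bound to the card (the PCI address comes from the lspci output above; the -k option requires a reasonably recent pciutils):

    # Resolve the driver symlink for the device directly
    readlink /sys/bus/pci/devices/0000:02:01.0/driver

    # Or ask lspci which kernel driver is in use for that slot
    lspci -k -s 02:01.0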
    In most cases, the driver that Linux installs will work fine; in some cases it doesn't. When configuring a Dell server with a Broadcom network card, I have seen severe problems, where a single ping command using a jumbo frame packet was capable of causing a kernel panic. One of the first things to suspect in such a case is the kernel driver for the network card itself. A nice troubleshooting approach is to start by finding out which version of the driver you are using. You can accomplish this by using the modinfo command on the driver itself. Here is an example of modinfo on the pcnet32 driver:
    JBO:/ # modinfo pcnet32
    filename:       /lib/modules/2.6.27.19-5-pae/kernel/drivers/net/pcnet32.ko
    license:        GPL
    description:    Driver for PCnet32 and PCnetPCI based ethercards
    author:         Thomas Bogendoerfer
    srcversion:     261B01C36AC94382ED8D984
    alias:          pci:v00001023d00002000sv*sd*bc02sc00i*
    alias:          pci:v00001022d00002000sv*sd*bc*sc*i*
    alias:          pci:v00001022d00002001sv*sd*bc*sc*i*
    depends:        mii
    supported:      yes
    vermagic:       2.6.27.19-5-pae SMP mod_unload modversions 586
    parm:           debug:pcnet32 debug level (int)
    parm:           max_interrupt_work:pcnet32 maximum events handled per interrupt (int)
    parm:           rx_copybreak:pcnet32 copy breakpoint for copy-only-tiny-frames (int)
    parm:           tx_start_pt:pcnet32 transmit start point (0-3) (int)
    parm:           pcnet32vlb:pcnet32 Vesa local bus (VLB) support (0/1) (int)
    parm:           options:pcnet32 initial option setting(s) (0-15) (array of int)
    parm:           full_duplex:pcnet32 full duplex setting(s) (1) (array of int)
    parm:           homepna:pcnet32 mode for 79C978 cards (1 for HomePNA, 0 for Ethernet, default Ethernet (array of int)
    The modinfo command gives you a variety of useful information for each module. If a version number is included, check whether an updated version is available, and download and install it.
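    For example, to print just the version field (not every module provides one, as the pcnet32 output above shows; e1000e is simply an illustrative module name):

    modinfo -F version e1000e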
    When working with some hardware, you should also check what kind of module is used. If the module is open source, in general it's fine, as open source modules are thoroughly checked by the Linux community. If the module is proprietary, there may be incompatibilities between the kernel and the particular module. If this is the case, your kernel is flagged as "tainted." A tainted kernel is a kernel that has some modules loaded that are not controlled by the Linux kernel community. To find out if this is the case on your system, you can check the contents of the /proc/sys/kernel/tainted file. If this file contains a 0, no proprietary modules are loaded. If it is non-zero (the value is actually a bitmask of taint reasons, and bit 0 indicates a proprietary module), proprietary modules are loaded, and you may be able to fix the situation if you replace the proprietary module with an open source one.
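    Checking is a one-liner:

    # 0 means untainted; any non-zero value is a bitmask of taint reasons
    cat /proc/sys/kernel/tainted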
    The information in this article should help you in fixing driver related issues.