Wednesday, December 14, 2011

New features of yum in RHEL-6.1 now that it's released

A few things you might not know about RHEL-6.1+ yum

  • Search is more user friendly

    As we maintain yum we are always looking for the "minor" changes that can make a big difference to the user, and this is probably one of the biggest minor changes. As of late RHEL-5 and RHEL-6.0, "yum search" was great for finding obscure things that you knew something about, but with 6.1 we've hopefully made it useful for finding the "everyday" packages whose exact names you can't remember. We did this by excluding a lot of the "extra" hits when you get a large search result. For instance, "yum search kvm manager" is pretty useless in RHEL-6.0, but in RHEL-6.1 you should find what you want very quickly.
    Example commands:

    yum search kvm manager
    yum search python url
  • The updateinfo command

    The "yum-security" or "yum-plugin-security" package has been around since early RHEL-5, but the RHEL-6.1 update has introduced the "updateinfo" command to make things a little easier to use, and you can now easily view installed security errata (to more easily make sure you are secure). We've also added a few new pieces of data to the RHEL updateinfo data. Probably the most significant is that, as well as being marked "security" or not, errata are now tagged with their "severity". So you can automatically apply only "critical" security updates, for example.
Example commands:

yum updateinfo list security all
yum update-minimal --sec-severity=critical

  • The versionlock command

    As with the previous point, we've had the "yum-plugin-versionlock" package for a long time, but now we've made it easier to use and put all its functions under a single "versionlock" sub-command. You can now also "exclude" specific versions you don't want, instead of locking to known-good specific ones you had tested.
Example commands:

# Lock to the version of yum currently installed.
yum versionlock add yum
# Opposite, disallow versions of yum currently available:
yum versionlock exclude yum
yum versionlock list
yum versionlock delete yum\*
yum versionlock clear
# This will show how many "excluded" packages are in each repo.
yum repolist -x .

  • Manage your own .repo variables

    This is actually available in RHEL-6.0, but given that almost nobody knows about it I thought I'd share it here. You can put files in "/etc/yum/vars" and then use the names of those files as variables in any yum configuration, just like $basearch or $releasever. There is also a special $uuid variable, so you can track individual machines if you want to.
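    For example, a minimal sketch (the variable name "myrelease", its value, and the repo definition are all hypothetical, and writing under /etc/yum/vars requires root):

```shell
# The file name under /etc/yum/vars becomes the variable name,
# and the file's first line becomes its value (run as root).
mkdir -p /etc/yum/vars
echo "6Server" > /etc/yum/vars/myrelease

# Any .repo file can then use $myrelease, just like $basearch or $releasever:
# [myrepo]
# name=My repo for $myrelease
# baseurl=file:///repos/$myrelease/$basearch/
```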

  • yum has its own DB

    Again, this is something that was there in RHEL-6.0 but has improved (and is likely to improve more over time). The most noticeable addition is that we now store the "installed_by" and "changed_by" attributes. This could be worked out from "yum history" before, but now it's easily available directly from the installed package.
    Example commands:
    yumdb info yum 
    yumdb set installonly keep kernel-2.6.32-71.7.1.el6 
    yumdb sync
  • Additional data in "yum history"

    Again, this is something that was there in RHEL-6.0 but has improved (and is likely to improve more over time). The most noticeable additions are that we now store the command line, and we store a "transaction file" that you can use on other machines.
    Example commands:

    yum history
    yum history pkgs yum
    yum history summary
    yum history undo last
    yum history addon-info 1    config-main
    yum history addon-info last saved_tx
  • "yum install" is now fully kickstart compatible

    As of RHEL-6.0 there was one thing you could do in a kickstart package list that you couldn't do in "yum install", and that was to "remove" packages with "-package". As of the RHEL-6.1 yum you can do that, and we also added that functionality to upgrade/downgrade/remove. Apart from anything else, this should make it very easy to turn a kickstart package list into "yum shell" files (which can even be run in kickstart's %post).
    Example commands:

     yum install 'config(postfix) >= 2.7.0'
     yum install MTA
     yum install '/usr/kerberos/sbin/*'
     yum -- install @books -javanotes
  • Easier to change yum configuration

    We tended to get a lot of feature requests for plugins to add a command line option so the user could change a single yum.conf variable, and we had to evaluate those requests for general distribution based on how much we thought all users would want/need them. With the RHEL-6.1 yum we created the --setopt option so that any option can be changed easily, without having to create a specific bit of code. There were also some updates to the yum-config-manager command.
    Example commands:
    yum --setopt=alwaysprompt=false upgrade yum
    yum-config-manager
    yum-config-manager --enable myrepo
    yum-config-manager --add-repo
  • Working towards managing 10 machines easily

    yum is the best way to manage a single machine, but it isn't quite as good at managing 10 identical machines. While the RHEL-6.1 yum still isn't great at this, we've made a few improvements that should help significantly. The biggest is probably the "load-ts" command, and the infrastructure around it, which allows you to easily create a transaction on one machine, test it, and then "deploy" it to a number of other machines. This is done with checking on the yum side that the machines started from the same place (via rpmdb versions), so that you know you are doing the same operation.
    Also worth noting is that we have added a plugin hook to the "package verify" operation, allowing things like "puppet" to hook into the verification process. A prototype of what that should allow those kinds of tools to do was written by Seth Vidal here.
    Example commands:

    # Find the current rpmdb version for this machine (available in RHEL-6.0)
    yum version nogroups
    # Completely re-image a machine, or dump its "package image"
    # This is the easiest way to get a transaction file without modifying the rpmdb
    echo | yum update blah
    ls ${TMPDIR:-/tmp}/yum_save_tx-* | sort | tail -1
    # You can now load a transaction and/or see the previous transaction from the history
    yum load-ts /tmp/yum_save_tx-2011-01-17-01-00ToIFXK.yumtx
    yum -q history addon-info last saved_tx > my-yum-saved-tx.yumtx


    Tuesday, November 22, 2011

    Linux filtering and transforming text - Command Line Reference

    View defined directives in a config file:

    grep -v '^#' /etc/vsftpd/vsftpd.conf

    View a line matching "Initializing CPU" and the 5 lines immediately after the match (with 'grep'), or print a specific line range (with 'sed'):

    grep -A 5 "Initializing CPU#1" dmesg
    sed -n '101,110p' /var/log/cron   # displays lines 101 to 110 of the log file

    Exclude comment and empty lines:

    grep -v '^#' /etc/vsftpd/vsftpd.conf | grep .
    grep -v '^#' /etc/ssh/sshd_config | sed -e /^$/d
    grep -v '^#' /etc/ssh/sshd_config | awk /./{print}

    More examples of GREP :

    grep smug *.txt {search *.txt files for 'smug'}
    grep BOB tmpfile
    {search 'tmpfile' for 'BOB' anywhere in a line}
    grep -i -w blkptr *
    {search files in CWD for word blkptr, any case}
    grep run[- ]time *.txt
    {find 'run time' or 'run-time' in all txt files}
    who | grep root
    {pipe who to grep, look for root}
    grep smug files
    {search files for lines with 'smug'}
    grep '^smug' files
    {'smug' at the start of a line}
    grep 'smug$' files
    {'smug' at the end of a line}
    grep '^smug$' files
    {lines containing only 'smug'}
    grep '\^s' files
    {lines starting with '^s', "\" escapes the ^}
    grep '[Ss]mug' files
    {search for 'Smug' or 'smug'}
    grep 'B[oO][bB]' files 
    {search for BOB, Bob, BOb or BoB }
    grep '^$' files
    {search for blank lines}
    grep '[0-9][0-9]' file
    {search for pairs of numeric digits}
    grep '^From: ' /usr/mail/$USER
    {list your mail}
    grep '[a-zA-Z]'
    {any line with at least one letter}
    grep '[^a-zA-Z0-9]'
    {anything not a letter or number}
    grep '[0-9]\{3\}-[0-9]\{4\}'
    {999-9999, like phone numbers}
    grep '^.$'
    {lines with exactly one character}
    grep '"smug"'
    {'smug' within double quotes}
    grep '"*smug"*'
    {'smug', with or without quotes}
    grep '^\.'
    {any line that starts with a Period "."}
    grep '^\.[a-z][a-z]'
    {line start with "." and 2 lc letters}

    Grep command symbols used to search files:

    ^ (Caret) = match expression at the start of a line, as in ^A.
    $ (Dollar Sign) = match expression at the end of a line, as in A$.
    \ (Back Slash) = turn off the special meaning of the next character, as in \^.
    [ ] (Brackets) = match any one of the enclosed characters, as in [aeiou].
    Use Hyphen "-" for a range, as in [0-9].
    [^ ] = match any one character except those enclosed in [ ], as in [^0-9].
    . (Period) = match a single character of any value, except end of line.
    * (Asterisk) = match zero or more of the preceding character or expression.
    \{x,y\} = match x to y occurrences of the preceding.
    \{x\} = match exactly x occurrences of the preceding.
    \{x,\} = match x or more occurrences of the preceding.
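    Combining a few of these symbols, a quick sanity check (the sample text is made up for illustration):

```shell
# \{3\} and \{4\} bound the number of digits matched by each [0-9]:
echo "call 555-1234 today" | grep '[0-9]\{3\}-[0-9]\{4\}'
# prints: call 555-1234 today
```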

    Thursday, November 10, 2011

    Password policy rules in Linux

    Setting up stronger password policy rules in Linux

    Increased password security is no longer an optional item in setting up a secure system.  Many external organizations (such as PCI) are now mandating security policies that can have a direct effect on your systems.  By default, the account and password restrictions enabled on a Linux box are minimal at best.  To better secure your hosts and meet those requirements from external vendors and organizations, here’s a small how-to on setting up stronger password and account policies in Linux.  This is targeted at RHEL so other distributions may or may not be 100% compatible.

    As an example, let us assume that our security department has created an account security policy document.  This document identifies both account and password restrictions that are now going to be required for all accounts both existing and new.
    The document states that passwords must:
    • Be at least 8 characters long.
    • Contain at least one upper case character.
    • Contain at least one lower case character.
    • Contain at least one special character (!, @, #, $, %, etc.).
    • Warn 7 days prior to expiration.
    • Expire after 90 days.
    • Lock after 97 days.
    The good news is that Linux has all of these features and can be set up to meet the requirements given to us. Unfortunately, though, Linux doesn't have all this information located in one central place. If you're not using the Red Hat-supplied redhat-config-users GUI, you're going to have to make the changes manually. Since our server systems don't run X, we will be making the changes directly to the system without the help of the GUI.
    In RHEL, changes are made in multiple locations.  They are:
    • /etc/pam.d/system-auth
    • /etc/login.defs
    • /etc/default/useradd
    In Linux, password changes are passed through PAM. To satisfy the first three requirements we must modify the PAM entry that corresponds with passwords. /etc/pam.d/system-auth is the PAM file responsible for authentication and where we will make our first modifications. Inside /etc/pam.d/system-auth there are entries based on a “type” that the rules apply to. As we are only discussing password rules, you will see a password type.

    password    requisite     /lib/security/$ISA/ retry=3
    The password type is “required for updating the authentication token associated with the user.” Simply put, we need a password type to update the password. Looking at the example, we can see that pam_cracklib is the default module that is responsible for this operation. To configure pam_cracklib to meet our specifications we need to modify the line accordingly:
    • Minimum of 8 characters: minlen=8
    • At least one upper case character: ucredit=-1
    • At least one lower case character: lcredit=-1
    • At least one special character: ocredit=-1
    Our /etc/pam.d/system-auth will now look like this:

    #password    requisite     /lib/security/$ISA/ retry=3
    password    requisite     /lib/security/$ISA/ retry=3 minlen=8 ucredit=-1 lcredit=-1 dcredit=-1 ocredit=-1
    If you are curious as to how pam_cracklib defines and uses credits, check the link above to the pam_cracklib module page. Next, to meet the requirement to have passwords expire after 90 days, we need to modify the /etc/login.defs file.
    # Password aging controls:
    #
    #   PASS_MAX_DAYS   Maximum number of days a password may be used.
    #   PASS_MIN_DAYS   Minimum number of days allowed between password changes.
    #   PASS_MIN_LEN    Minimum acceptable password length.
    #   PASS_WARN_AGE   Number of days warning given before a password expires.
    #
    PASS_MAX_DAYS 90
    PASS_MIN_DAYS 0
    PASS_MIN_LEN 8
    PASS_WARN_AGE 7
    Notice how PASS_MIN_LEN is set here as well. Since we have been given some latitude on when to warn users, we have chosen to warn users seven days prior to expiration. But our last item is curiously missing. Where do we set up accounts so that after 97 days the account is locked out and requires a system administrator to unlock it?
    Believe it or not, useradd controls the initial locking of an account. Issuing a useradd -D will show you the current default parameters that are used when useradd is invoked.

    [root@host ~]# useradd -D
    GROUP=100
    HOME=/home
    INACTIVE=-1
    EXPIRE=
    SHELL=/bin/bash
    SKEL=/etc/skel
    CREATE_MAIL_SPOOL=yes
    The INACTIVE=-1 entry defines when an account will be deactivated. Inactive is defined as the "number of days after a password has expired before the account will be disabled." Our requirements state that the account should be disabled seven days after password expiration. To set this we can either:

    • Invoke useradd -D -f 7
    • Modify /etc/default/useradd and change the INACTIVE entry.
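    For the second method, the relevant line in /etc/default/useradd after editing would look like this (a sketch):

```
# /etc/default/useradd
INACTIVE=7
```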
    Just remember that an inactive or disabled account is a locked account, whereas an expired account is not locked. With this last change, all of the requirements that have been given to us have been met. We modified the password rules for all new passwords, set up the system to activate password aging, and configured the system to disable an account if necessary. But one issue remains: if this is not a new system, what happens to all existing accounts? The answer is nothing.
    In the next installment I’ll show you how to make our modifications effective on existing user accounts…

    Monday, October 24, 2011

    How To Disable SSH Host Key Checking

    Remote login using the SSH protocol is a frequent activity in today's internet world. With the SSH protocol, the onus is on the SSH client to verify the identity of the host to which it is connecting. The host identity is established by its SSH host key. Typically, the host key is auto-created during initial SSH installation setup.

    By default, the SSH client verifies the host key against a local file containing known, trustworthy machines. This provides protection against possible Man-In-The-Middle attacks. However, there are situations in which you want to bypass this verification step. This article explains how to disable host key checking using OpenSSH, a popular Free and Open-Source implementation of SSH.

    When you login to a remote host for the first time, the remote host's host key is most likely unknown to the SSH client. The default behavior is to ask the user to confirm the fingerprint of the host key.
    $ ssh peter@
    The authenticity of host ' (' can't be established.
    RSA key fingerprint is 3f:1b:f4:bd:c5:aa:c1:1f:bf:4e:2e:cf:53:fa:d8:59.
    Are you sure you want to continue connecting (yes/no)? 

    If your answer is yes, the SSH client continues login, and stores the host key locally in the file ~/.ssh/known_hosts. You only need to validate the host key the first time around: in subsequent logins, you will not be prompted to confirm it again.

    Yet, from time to time, when you try to remote login to the same host from the same origin, you may be refused with the following warning message:
    $ ssh peter@
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    @    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
    Someone could be eavesdropping on you right now (man-in-the-middle attack)!
    It is also possible that the RSA host key has just been changed.
    The fingerprint for the RSA key sent by the remote host is
    Please contact your system administrator.
    Add correct host key in /home/peter/.ssh/known_hosts to get rid of this message.
    Offending key in /home/peter/.ssh/known_hosts:3
    RSA host key for has changed and you have requested strict checking.
    Host key verification failed.$

    There are multiple possible reasons why the remote host key changed. A Man-in-the-Middle attack is only one possible reason. Other possible reasons include:
    • OpenSSH was re-installed on the remote host but, for whatever reason, the original host key was not restored.
    • The remote host was replaced legitimately by another machine.

    If you are sure that this is harmless, you can use either of the two methods below to trick OpenSSH into letting you log in. But be warned that you have become vulnerable to man-in-the-middle attacks.

    The first method is to remove the remote host from the ~/.ssh/known_hosts file. Note that the warning message already tells you the line number in the known_hosts file that corresponds to the target remote host. The offending line in the above example is line 3 ("Offending key in /home/peter/.ssh/known_hosts:3").

    You can use the following one liner to remove that one line (line 3) from the file.
    $ sed -i 3d ~/.ssh/known_hosts

    Note that with the above method, you will be prompted to confirm the host key fingerprint when you run ssh to login.

    The second method uses two OpenSSH parameters:
    • StrictHostKeyChecking, and
    • UserKnownHostsFile.

    This method tricks SSH by configuring it to use an empty known_hosts file, and NOT to ask you to confirm the remote host identity key.
    $ ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no peter@
    Warning: Permanently added '' (RSA) to the list of known hosts.
    peter@'s password:

    The UserKnownHostsFile parameter specifies the database file to use for storing the user host keys (default is ~/.ssh/known_hosts).

    The /dev/null file is a special system device file that discards anything and everything written to it, and when used as the input file, returns End Of File immediately.

    By configuring the null device file as the host key database, SSH is fooled into thinking that the SSH client has never connected to any SSH server before, and so will never run into a mismatched host key.

    The parameter StrictHostKeyChecking specifies whether SSH will automatically add new host keys to the host key database file. By setting it to no, the host key is automatically added, without user confirmation, for all first-time connections. Because of the null key database file, every connection is treated as a first-time connection to any SSH server host, so the host key is always added without user confirmation. Writing the key to the /dev/null file discards the key and reports success.


    By specifying the above 2 SSH options on the command line, you can bypass host key checking for that particular SSH login. If you want to bypass host key checking on a permanent basis, you need to specify those same options in the SSH configuration file.

    You can edit the global SSH configuration file (/etc/ssh/ssh_config) if you want to make the changes permanent for all users.

    If you want to target a particular user, modify the user-specific SSH configuration file (~/.ssh/config). The instructions below apply to both files.

    Suppose you want to bypass key checking for a particular subnet (192.168.0.*).

    Add the following lines to the beginning of the SSH configuration file.
    Host 192.168.0.*
       StrictHostKeyChecking no

    Note that the configuration file should have a line like Host * followed by one or more parameter-value pairs. Host * means that it will match any host. Essentially, the parameters following Host * are the general defaults. Because the first matched value for each SSH parameter is used, you want to add the host-specific or subnet-specific parameters to the beginning of the file.

    As a final word of caution, unless you know what you are doing, it is probably best to bypass key checking on a case by case basis, rather than making blanket permanent changes to the SSH configuration files.


    Tuesday, October 11, 2011

    History of Virtualization

    When you think of the beginning of Server Virtualization, companies like VMWare may come to mind. The thing you may not realize is Server Virtualization actually started back in the early 1960’s and was pioneered by companies like General Electric (GE), Bell Labs, and International Business Machines (IBM).

    The Invention of the Virtual Machine

    In the early 1960's IBM had a wide range of systems, each generation of which was substantially different from the previous. This made it difficult for customers to keep up with the changes and requirements of each new system. Also, computers could only do one thing at a time. If you had two tasks to accomplish, you had to run the processes in batches. This batch processing requirement wasn't too big of a deal to IBM since most of their users were in the scientific community, and up until this time batch processing seemed to have met the customers' needs.

    Because of the wide range of hardware requirements, IBM began work on the S/360 mainframe system, designed as a broad replacement for many of their other systems while maintaining backwards compatibility. When the system was first designed, it was meant to be a single-user system to run batch jobs.

    However, this focus began to change on July 1, 1963 when the Massachusetts Institute of Technology (MIT) announced Project MAC. Project MAC stood for Mathematics and Computation, but was later renamed Multiple Access Computer. Project MAC was funded by a $2 million grant from DARPA to fund research into computers, specifically in the areas of operating systems, artificial intelligence, and computational theory.

    As part of this research grant, MIT needed new computer hardware capable of supporting more than one simultaneous user, and sought proposals from various computer vendors, including GE and IBM. At this time, IBM was not willing to make a commitment towards a time-sharing computer because they did not feel there was a big enough demand, and MIT did not want to have to use a specially modified system. GE, on the other hand, was willing to make a commitment towards a time-sharing computer. For this reason MIT chose GE as their vendor.

    The loss of this opportunity was a bit of a wake-up call for IBM, who then started to take notice of the demand for such a system, especially when they heard of Bell Labs' need for a similar system.

    In response to the need from MIT and Bell Labs, IBM designed the CP-40 mainframe. The CP-40 was never sold to customers, and was only used in labs. However, it is still important since the CP-40 later evolved into the CP-67 system, the first commercial mainframe to support virtualization. The operating system which ran on the CP-67 was referred to as CP/CMS. CP stands for Control Program; CMS stands for Cambridge Monitor System (later renamed Conversational Monitor System). CMS was a small single-user operating system designed to be interactive. CP was the program which created virtual machines. The idea was that CP ran on the mainframe and created virtual machines running CMS, which the user would then interact with.

    The user interaction portion is important. Before this system, IBM focused on systems where there was no user interaction. You would feed your program into the computer, it would do its thing, then spit out the output to a printer or a screen. An interactive operating system meant you actually had a way of interacting with the programs while they ran.

    The first version of the CP/CMS operating system was known as CP-40, but was only used in the lab. The initial release of CP/CMS to the public was in 1968; the first stable release wasn't until 1972.

    The traditional approach for a time sharing computer was to divide up the memory and other system resources between users. An example of a time sharing operating system from the era is Multics. Multics was created as part of Project MAC at MIT. Additional research and development was performed on Multics at Bell Labs, where it later evolved into Unix.

    The CP approach to time sharing allowed each user to have their own complete operating system, which effectively gave each user their own computer, and the operating system was much simpler.

    The main advantages of using virtual machines vs a time sharing operating system were more efficient use of the system, since virtual machines were able to share the overall resources of the mainframe instead of having the resources split equally between all users. There was better security, since each user was running in a completely separate operating system. And it was more reliable, since no one user could crash the entire system; only their own operating system.

    Portability of Software

    In the previous section, I mentioned Multics and how it evolved into Unix. While Unix does not run virtualized operating systems, it is still a good example of virtualization from another perspective. Unix is not the first multi-user operating system, but it is a very good example of one, and is one of the most widely used ever.

    Unix is an example of virtualization at the user or workspace level. Multiple users share the same pool of resources (CPU, memory, hard disk, etc.), but each has their own profile, separate from the other users on the system. Depending on the way the system is configured, the user may be able to install their own set of applications, and security is handled on a per-user basis. Not only was Unix an early step towards multi-user operating systems, but it was also an early step towards application virtualization.

    Unix is not an example of application virtualization, but it did allow users much greater portability of their applications. Prior to Unix, almost all operating systems were coded in assembly language. Unix, in contrast, was created using the C programming language. Since Unix was written in C, only small parts of the operating system had to be customized for a given hardware platform; the rest of the operating system could easily be re-compiled for each hardware platform with little or no changes.

    Application Virtualization

    Through the use of Unix and C compilers, an adept user could run just about any program on any platform, but it still required users to compile all the software on the platform they wished to run it on. For true portability of software, you needed some sort of software virtualization.

    In 1990, Sun Microsystems began a project known as "Stealth". Stealth was a project run by engineers who had become frustrated with Sun's use of C/C++ APIs and felt there was a better way to write and run applications. Over the next several years the project was renamed several times, including names such as Oak and Web Runner, until finally, in 1995, the project was renamed Java.

    In 1994 Java was targeted towards the World Wide Web, since Sun saw this as a major growth opportunity. The Internet is a large network of computers running different operating systems, and at the time it had no way of running rich applications universally; Java was the answer to this problem. In January 1996, the Java Development Kit (JDK) was released, allowing developers to write applications for the Java platform.

    At the time, there was no other language like Java. Java allowed you to write an application once, then run the application on any computer with the Java Runtime Environment (JRE) installed. The JRE was and still is a free application you can download from the Sun Microsystems website, now Oracle's website.

    Java works by compiling the application into something known as Java Byte Code. Java Byte Code is an intermediate language that can only be read by the JRE. Java uses a concept known as Just-in-Time compilation (JIT). When you write your program, your Java code is not compiled for any particular hardware. Instead, it is converted into Java Byte Code, which is not compiled to native code until just before the program is executed. This is similar to the way Unix revolutionized operating systems through its use of the C programming language. Since the JRE compiles the software just before running, the developer does not need to worry about what operating system or hardware platform the end user will run the application on; and the user does not need to know how to compile a program, as that is handled by the JRE.
    The JRE is composed of many components, the most important of which is the Java Virtual Machine. Whenever a Java application is run, it is run inside of the Java Virtual Machine. You can think of the Java Virtual Machine as a very small operating system, created with the sole purpose of running your Java application. Since Sun/Oracle goes through the trouble of porting the Java Virtual Machine to run on various systems, from your cellular phone to the servers in your data-center, you don't have to. You can write the application once, and run it anywhere. At least that is the idea; there are some limitations.

    Mainstream Adoption of Hardware Virtualization

    As was covered in the Invention of the Virtual Machine section, IBM was the first to bring the concept of virtual machines to the commercial environment. Virtual machines as they were on IBM's mainframes are still in use today; however, most companies don't use mainframes.
    In January of 1987, Insignia Solutions demonstrated a software emulator called SoftPC. SoftPC allowed users to run DOS applications on their Unix workstations, a feat that had never been possible before. At the time, a PC capable of running MS-DOS cost around $1,500. SoftPC gave users with a Unix workstation the ability to run DOS applications for a mere $500.
    By 1989, Insignia Solutions had released a Mac version of SoftPC, giving Mac users the same capabilities, and had added the ability to run Windows applications, not just DOS applications. By 1994, Insignia Solutions began selling their software packaged with pre-loaded operating systems, including SoftWindows and SoftOS/2.
    Inspired by the success of SoftPC, other companies began to spring up. In 1997, a company called Connectix created a program called Virtual PC for the Macintosh. Virtual PC, like SoftPC, allowed users to run a copy of Windows on their Mac computer, in order to work around software incompatibilities. In 1998, a company called VMWare was established, and in 1999 began selling a product similar to Virtual PC called VMWare Workstation. Initial versions of VMWare Workstation only ran on Windows, but support for other operating systems was added later.
    I mention VMWare because they are the market leader in virtualization today. In 2001, VMWare released two new products as they branched into the enterprise market: ESX Server and GSX Server. GSX Server allowed users to run virtual machines on top of an existing operating system, such as Microsoft Windows; this is known as a Type-2 hypervisor. ESX Server is known as a Type-1 hypervisor, and does not require a host operating system to run virtual machines.
    A Type-1 Hypervisor is much more efficient than a Type-2 hypervisor since it can be better optimized for virtualization, and does not require all the resources it takes to run a traditional operating system.
    Since releasing ESX Server in 2001, VMWare has seen exponential growth in the enterprise market, and has added many complementary products to enhance ESX Server. Other vendors have since entered the market. Microsoft acquired Connectix in 2003, after which they re-released Virtual PC as Microsoft Virtual PC 2004, then Microsoft Virtual Server 2005, both of which were unreleased Connectix products at the time Microsoft acquired them.
    Citrix Inc, entered the Virtualization market in 2007 when they acquired Xensource, an open source virtualization platform which started in 2003. Citrix soon thereafter renamed the product to Xenserver.

    Published Applications

    In the early days of UNIX, you could access published applications via a Telnet interface, and later via SSH. Telnet is a small program that lets you remotely access another computer. SSH is a secure replacement for telnet that adds features such as encryption.
    Telnet/SSH gives you access to either a text interface or a graphical interface, although it is not really optimized for graphics. Using telnet, you can access much of the functionality of a given server from almost anywhere.
    Windows and OS/2 had no way of remotely accessing applications without third-party tools, and the third-party tools available only allowed one user at a time.
    Some engineers at IBM had an idea to create a multi-user interface for OS/2; however, IBM did not share that vision. So in 1989 Ed Iacobucci left IBM and started his own company called Citrus. Due to an existing trademark, the company was quickly re-branded as Citrix, a combination of Citrus and UNIX.
    Citrix licensed the OS/2 source code through Microsoft and began working on its extension to OS/2. The company operated for two years and created a multi-user interface for OS/2 called MULTIUSER. However, Citrix was forced to abandon the project in 1991 after Microsoft announced it was no longer going to support OS/2. At that point, Citrix licensed source code from Microsoft and began working on a similar product focused on Windows.
    In 1993, Citrix acquired Netware Access Server from Novell. This product was similar to what Citrix had accomplished for OS/2 in that it gave multiple users access to a single system. Citrix licensed the Windows NT source code from Microsoft, then in 1995 began selling a product called WinFrame. WinFrame was a version of Windows NT 3.5 with remote access capabilities, allowing multiple users to access the system at the same time to run applications remotely.
    While Citrix was developing WinFrame for Windows NT 4.0, Microsoft decided to stop granting the necessary licenses. Citrix then licensed the WinFrame technology to Microsoft, and it was included with Windows NT 4.0 as Terminal Services. As part of this agreement, Citrix agreed not to create a competing product, but was allowed to extend the functionality of Terminal Services.

    Virtual Desktops

    Virtual Desktop Infrastructure (VDI) is the practice of running a user's desktop operating system, such as Windows XP, inside a virtual machine on centralized infrastructure. Virtual desktops as we think of them today are a fairly new topic of conversation, but they are very similar to the idea IBM had back in the 1960s with virtual machines on its mainframe computers: give each user on the system their own operating system, and each user can then do as they please without disrupting any other users. Each user has their own computer, it is centralized, and it is a very efficient use of resources.
    Comparing Multics from the 1960s to the IBM mainframes of the era is similar to comparing a Microsoft Terminal Server to a virtual desktop infrastructure today.
    The jump from virtual desktops on mainframes to virtual desktops as we know them today didn't really happen until 2007, when VMware introduced its VDI product. Before that release it was possible for users in a company to use virtual desktops as their primary computers, but it wasn't really a viable solution due to management headaches. The introduction of desktop-management products from VMware, and similar products from companies like Microsoft and Citrix, has allowed this area to grow very rapidly.


    Computer virtualization has a long history, spanning nearly half a century. It can make your applications easier to access remotely, allow your applications to run on more systems than originally intended, improve stability, and use resources more efficiently.

    Some of these technologies, such as virtual desktops, can be traced back to the 1960s; others, such as virtualized applications, go back only a few years.

    Ubuntu Enterprise Cloud (UEC) : How to

    Grow Your Own Cloud Servers With Ubuntu

    Have you been wanting to fly to the cloud, to experiment with cloud computing? Now is your chance. With this article, we will step through the process of setting up a private cloud system using Ubuntu Enterprise Cloud (UEC), which is powered by the Eucalyptus platform.
    The system is made up of one cloud controller (also called a front-end server) and one or more node controllers. The cloud controller manages the cloud environment. You can install the default Ubuntu OS images or create your own to be virtualized. The node controllers are where you can run the virtual machine (VM) instances of the images.

    System Requirements

    At least two computers must be dedicated to this cloud for it to work:
    • One for the front-end server (cloud or cluster controller) with a minimum 1GHz CPU, 512MB of memory, CD-ROM, 40GB of disk space, and an Ethernet network adapter
    • One or more for the node controller(s) with a CPU that supports Virtualization Technology (VT) extensions, 1GB of memory, CD-ROM, 40GB of disk space and an Ethernet network adapter
    You might want to reference a list of Intel processors that include VT extensions. Optionally, you can run a utility called SecurAble in Windows. In Linux, you can check whether a computer supports VT by seeing if "vmx" or "svm" is listed in the /proc/cpuinfo file: run the command egrep '(vmx|svm)' /proc/cpuinfo. Bear in mind, however, that this only tells you whether the CPU supports VT; the BIOS could still be set to disable it.
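    Wrapped up as a small helper (the function name is ours, for illustration), the check looks like this; it reads CPU flag text on stdin so you can point it at /proc/cpuinfo:

```shell
# Report whether hardware virtualization (Intel vmx or AMD svm) is
# advertised in CPU flag text read from stdin. Note: the BIOS may still
# have VT disabled even when the flag is present.
check_vt() {
  if grep -Eq '(vmx|svm)'; then
    echo "vt-supported"
  else
    echo "vt-missing"
  fi
}

# On a real machine you would run:
#   check_vt < /proc/cpuinfo
```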

    Preparing for the Installation

    First, download the CD image for the Ubuntu Server remix (we're using version 9.10) on any PC with a CD or DVD burner, then burn the ISO image to a CD or DVD. If you want to use a DVD, make sure the computers that will be in the cloud can read DVDs. If you're using Windows 7, you can open the ISO file and use the native burning utility; on Windows Vista or earlier, you can download a third-party application like DoISO.
    Before starting the installation, make sure the computers involved are set up with the peripherals they need (i.e., monitor, keyboard and mouse). Also make sure they're plugged into the network so they'll automatically configure their network connections.

    Installing the Front-End Server

    The installation of the front-end server is straightforward. To begin, simply insert the install CD, and on the boot menu select "Install Ubuntu Enterprise Cloud", and hit Enter. Configure the language and keyboard settings as needed. When prompted, configure the network settings.
    When prompted for the Cloud Installation Mode, hit Enter to choose the default option, "Cluster". Then you'll have to configure the Time Zone and Partition settings. After partitioning, the installation will finally start. At the end, you'll be prompted to create a user account.
    Next, you'll configure settings for proxy, automatic updates and email. Plus, you'll define a Eucalyptus Cluster name. You'll also set the IP addressing information, so users will receive dynamically assigned addresses.

    Installing and Registering the Node Controller(s)

    The Node installation is even easier. Again, insert the install disc, select "Install Ubuntu Enterprise Cloud" from the boot menu, and hit Enter. Configure the general settings as needed.
    When prompted for the Cloud Installation Mode, the installer should automatically detect the existing cluster and preselect "Node." Just hit Enter to continue. The partitioning settings should be the last configuration needed.

    Registering the Node Controller(s)

    Before you can proceed, you must know the IP address of the node(s); you can check it from the node's command line.
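    As a sketch (the helper name and the interface name are assumptions, not part of the original article), you can read the address off the node's console with ip or ifconfig; the function below just pulls the first IPv4 address out of such output:

```shell
# Hypothetical helper: extract the first IPv4 address from the output of
# `ip -4 addr show <interface>` run on the node controller.
node_ip() {
  grep -oE 'inet [0-9.]+' | head -n1 | awk '{ print $2 }'
}

# Typical use on the node (interface name is an assumption):
#   ip -4 addr show eth0 | node_ip
```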
    Then, you must install the front-end server's public ssh key onto the node controller:
    1. On the node controller, set a temporary password for the eucalyptus user using the command:
      sudo passwd eucalyptus
    2. On the front-end server, enter the following command to copy the SSH key:
      sudo -u eucalyptus ssh-copy-id -i ~eucalyptus/.ssh/ eucalyptus@
    3. Then you can remove the eucalyptus account password from the node with the command:
      sudo passwd -d eucalyptus
    4. After the nodes are up and the key copied, run this command from the front-end server to discover and add the nodes:
      sudo euca_conf --no-rsync --discover-nodes

    Getting and Installing User Credentials

    Enter these commands on the front-end server to create a new folder, export the zipped user credentials to it, and then to unpack the files:
    mkdir -p ~/.euca
    chmod 700 ~/.euca
    cd ~/.euca
    sudo euca_conf --get-credentials (It takes a while for this to complete; just wait)
    cd -
    The user credentials are also available via the web-based configuration utility; however, downloading them there and moving them to the server takes more work.

    Setting Up the EC2 API and AMI Tools

    Now you must set up the EC2 API and AMI tools on your front-end server. First, source the eucarc file to set up your Eucalyptus environment by entering:
    . ~/.euca/eucarc
    To have this done automatically when you log in, enter the following command to append it to your ~/.bashrc file:
    echo "[ -r ~/.euca/eucarc ] && . ~/.euca/eucarc" >> ~/.bashrc
    Now to install the cloud user tools, enter:
    sudo apt-get install euca2ools
    To make sure it's all working, enter the following to display the cluster availability details:
    . ~/.euca/eucarc
    euca-describe-availability-zones verbose

    Accessing the Web-Based Control Panel

    Now you can access the web-based configuration utility. From any PC on the same network, go to https://<cloud-controller-IP>:8443 (the IP address of the cloud controller is displayed just after you log onto the front-end server). Note that this is a secure connection using HTTPS instead of plain HTTP. You'll probably receive a security warning from the web browser, since the server uses a self-signed certificate instead of one handed out by a known Certificate Authority (CA). Ignore the alert by adding an exception; the connection will still be encrypted.
    The default login credentials are "admin" for both the Username and Password. The first time you log in, you'll be prompted to set a new password and email address.

    Installing images

    Now that you have the basic cloud set up, you can install images. Bring up the web-based control panel, click the Store tab, and click the Install button for the desired image. It will start downloading and then install automatically, which can take a long time.

    Running images

    Before running an image on a node for the first time, run these commands to create a keypair for SSH:
    touch ~/.euca/mykey.priv
    chmod 0600 ~/.euca/mykey.priv
    euca-add-keypair mykey > ~/.euca/mykey.priv
    You also need to open port 22 up on the node, using the following commands:
    euca-authorize default -P tcp -p 22 -s
    Finally, you can run your registered image. The command to run it is available via the web interface. Login to the web interface, click the Store tab, and select the How to Run link for the desired image. It will display a popup with the exact command.
    The first time you run an instance, it will likely take a while for the image to be cached. You can get the status of your instance by running the command:
    watch -n5 euca-describe-instances
    Once it moves from "pending" to "running", reference the assigned IP address and connect to it:
    IPADDR=$(euca-describe-instances | grep $EMI | grep running | tail -n1 | awk '{print $4}')
    ssh -i ~/.euca/mykey.priv ubuntu@$IPADDR
    To terminate the instance:
    INSTANCEID=$(euca-describe-instances | grep $EMI | grep running | tail -n1 | awk '{print $2}')
    euca-terminate-instances $INSTANCEID

    Maintaining the cloud

    Now you should have a working cloud on your network. If you run into problems, you might have to reference the official documentation or hit the message boards. Before I leave, here are a few final tips:
    • To restart the front-end server run: sudo service eucalyptus [start|stop|restart]
    • To restart a node run: sudo service eucalyptus-nc [start|stop|restart]
    • Here are some key file locations:
      • Log files
      • Configuration files
      • Database
      • Keys
    Eric Geier is the Founder and CEO of NoWiresSecurity, which helps businesses easily protect their Wi-Fi with enterprise-level encryption by offering an outsourced RADIUS/802.1X authentication service. He is also the author of many networking and computing books for brands like For Dummies and Cisco Press.

    Thursday, October 6, 2011

    Setup of VSFTPD virtual users

    If you are hosting several web sites, you may, for security reasons, want each webmaster to have access to their own files only. One good way to do this is to give them FTP access by setting up VSFTPD virtual users and directories. This article describes how you can do that easily.
    (See also: Setup of VSFTPD virtual users – another approach)
    1. Installation of VSFTPD
    For Red Hat, CentOS and Fedora, you may install VSFTPD by the command
    # yum install vsftpd
    For Debian and Ubuntu,
    # apt-get install vsftpd
    2. Virtual users and authentication
    We are going to use pam_userdb to authenticate the virtual users. This needs a username/password file in `db' format, a common database format, which is created with the `db_load' program. For CentOS and Fedora, you may install the package `db4-utils':
    # yum install db4-utils
    For Ubuntu,
    # apt-get install db4.2-util
    To create a `db’ format file, first create a plain text file `virtual-users.txt’ with the usernames and passwords on alternating lines:
    Then execute the following command to create the actual database:
    # db_load -T -t hash -f virtual-users.txt /etc/vsftpd/virtual-users.db
    Now, create a PAM file /etc/pam.d/vsftpd-virtual which uses your database:
    auth required pam_userdb.so db=/etc/vsftpd/virtual-users
    account required pam_userdb.so db=/etc/vsftpd/virtual-users
    3. Configuration of VSFTPD
    Create a configuration file /etc/vsftpd/vsftpd-virtual.conf. The directive shown under each comment is the standard vsftpd option it describes; the PASV port range and umask values are typical choices you can adjust:
    # disables anonymous FTP
    anonymous_enable=NO
    # enables non-anonymous FTP
    local_enable=YES
    # activates virtual users
    guest_enable=YES
    # virtual users to use local privs, not anon privs
    virtual_use_local_privs=YES
    # enables uploads and new directories
    write_enable=YES
    # the PAM file used by authentication of virtual users
    pam_service_name=vsftpd-virtual
    # in conjunction with 'local_root',
    # specifies a home directory for each virtual user
    user_sub_token=$USER
    local_root=/var/www/virtual/$USER
    # the virtual user is restricted to the virtual FTP area
    chroot_local_user=YES
    # hides the FTP server user IDs and just displays "ftp" in directory listings
    hide_ids=YES
    # runs vsftpd in standalone mode
    listen=YES
    # listens on this port for incoming FTP connections
    listen_port=60021
    # the minimum port to allocate for PASV style data connections
    pasv_min_port=62222
    # the maximum port to allocate for PASV style data connections
    pasv_max_port=63333
    # controls whether PORT style data connections use port 20 (ftp-data)
    connect_from_port_20=YES
    # the umask for file creation
    local_umask=022
    4. Creation of home directories
    Create each user’s home directory in /var/www/virtual, and change the owner of the directory to the user `ftp’:
    # mkdir /var/www/virtual/mary
    # chown ftp:ftp /var/www/virtual/mary
    5. Startup of VSFTPD and test
    Now we can start VSFTPD by the command:
    # /usr/sbin/vsftpd /etc/vsftpd/vsftpd-virtual.conf
    and test the FTP access of a virtual user:
    # lftp -u mary -p 60021
    The virtual user should have full access to their own directory.

    Wednesday, October 5, 2011

    Hints on how to check your machine for intrusion

    The compromise of kernel.org and related machines has made it clear that
    some developers, at least, have had their systems penetrated.  As we
    seek to secure our infrastructure, it is imperative that nobody falls
    victim to the belief that it cannot happen to them.  We all need to
    check our systems for intrusions.  Here are some helpful hints as
    proposed by a number of developers on how to check to see if your Linux
    machine might be infected with something:
    0. One way to be sure that your system is not compromised is to simply
       do a clean install; we can all benefit from a new start sometimes.
       Before reinstalling any systems, though, consider following the steps
       below to learn if your system has been hit or not.
    1. Install the chkrootkit package from your distro repository and see if it
       reports anything.  If your distro doesn't have the chkrootkit package,
       download it from:
       Another tool is the ossec-rootcheck tool which can be found at:
       And another one is the rkhunter program:
       [Note, this tool has the tendency to give false-positives on some
       Debian boxes, please read /usr/share/doc/rkhunter/README.Debian.gz if
       you run this on a Debian machine]
    2. Verify that your package signatures match what your package manager thinks
       they are.
       To do this on a rpm-based system, run the following command:
        rpm --verify --all
       Please read the rpm man page for information on how to interpret the
       output of this command.
       To do this on a Debian based system, run the following bash snippet:
     dpkg -l \*|while read s n rest; do if [ "$s" == "ii" ]; then echo $n;
     fi; done > ~/tmp.txt
     for f in `cat ~/tmp.txt`; do debsums -s -a $f; done
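     The two-step snippet above can be condensed into one pipeline. As a sketch, the awk filter is split into a small function here so it can be tested on its own; debsums must be installed for the final step, and the -r flag assumes GNU xargs:

```shell
# Select installed ("ii") packages from `dpkg -l` output.
list_installed() { awk '$1 == "ii" { print $2 }'; }

# Check every installed package in one pipeline (requires debsums):
#   dpkg -l | list_installed | xargs -r -n1 debsums -s -a
```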
       If you have a source-based system (Gentoo, LFS, etc.) you presumably
       know what you are doing already.
    3. Verify that your packages are really signed with the distro's keys.
       Here's a bash snippet that can do this on a rpm based system to
       verify that the packages are signed with any key, not necessarily
       your distro's key.  That exercise is left for the reader:
     for package in `rpm -qa`; do
      sig=`rpm -q --qf '%{SIGPGP:pgpsig}\n' $package`
      if [ -z "$sig" ] ; then
       # check if there is a GPG key, not a PGP one
       sig=`rpm -q --qf '%{SIGGPG:pgpsig}\n' $package`
       if [ -z "$sig" ] ; then
        echo "$package does not have a signature!!!"
       fi
      fi
     done
       Unfortunately there is no known way of verifying this on Debian-based systems.
    4. To replace a package that you find suspect, uninstall it and install
       it anew from your distro.  For example, if you want to reinstall the
       ssh daemon, you would do:
     $ /etc/init.d/sshd stop
     rpm -e openssh
     zypper install openssh # for openSUSE based systems
     yum install openssh # for Fedora based systems
       Ideally do this from a live cdrom boot, using the 'rpm --root' option
       to point rpm at the correct location.
    5. From a liveCD environment, look for traces such as:
       a. Rogue startup scripts in /etc/rc*.d and equivalent directories.
       b. Strange directories in /usr/share that do not belong to a package.
          This can be checked on an rpm system with the following bash snippet:
     for file in `find /usr/share/`; do
      package=`rpm -qf -- ${file} | grep "is not owned"`
      if [ -n "$package" ] ; then
       echo "weird file ${file}, please check this out"
      fi
     done
    6. Look for mysterious log messages, such as:
       a. Unexpected logins in wtmp and /var/log/secure*, quite possibly
          from legitimate users from unexpected hosts.
       b. Any program trying to touch /dev/mem.
       c. References to strange (non-text) ssh version strings in
          /var/log/secure*.  These do not necessarily indicate *successful*
          breakins, but they indicate *attempted* breakins which means your
          system or IP address has been targeted.
    7. If any of the above steps show possible signs of compromise, you
       should investigate further and identify the actual cause.  If it
       becomes clear that the system has indeed been compromised, you should
       certainly reinstall the system from the beginning, and change your
       credentials on all machines that this machine would have had access
       to, or which you connected to through this machine.  You will need
       to check your other systems carefully, and you should almost
       certainly notify the administrators of other systems to which you
       have access.
    Finally, please note that these hints are not guaranteed to turn up
    signs of a compromised system.  There are a lot of attackers out there;
    some of them are rather more sophisticated than others.  You should
    always be on the alert for any sort of unexpected behavior from the
    systems you work with.
    I would like to add here a few checks I ran on firewall and system logs,
    which are easy to perform and report few false positives:
      - check that communications between your local machines are expected;
        for instance if you have an SSH bouncing machine, it probably receives
        tens of thousands of SSH connection attempts from outside every day,
        but it should never ever attempt to connect to another machine unless
        it's you who are doing it. So checking the firewall logs for SSH
        connections on port 22 from local machines should only report your
        activity (and nothing should happen when you sleep).
      - no SSH log should report failed connection attempts between your
        local machines (you do have your keys and remember your password).
        And if it happens from time to time (eg: user mismatch between
        machines), it should look normal to you. You should never observe
        a connection attempt for a user you're not familiar with (eg: admin).
         $ grep sshd /var/log/messages
         $ grep sshd /var/log/messages | grep 'Invalid user'
      - outgoing connections from your laptop, desktop or anything should
        never happen when you're not there, unless there is a well known
        reason (package updates, browser left open and refreshing ads). All
        unexpected activity should be analysed (eg: connections to port 80
        not coming from a browser should only match one distro mirror).
        This is particularly true for cheap appliances which become more
        and more common and are rarely secured. A NAS or media server, a
        switch, a WiFi router, etc... has no reason to ever connect anywhere
        without you being aware of it (eg: download a firmware update).
      - check for suspicious DNS requests from machines that are normally
        not accessed. A number of services perform DNS requests when
        connected to, in order to log a resolved address. If the machine
        was penetrated and the logs wiped, the DNS requests will probably
        still lie in the firewall logs. While there's nothing suspect about
        a machine that makes tens of thousands of DNS requests a day, one
        that makes only 10 might be.
      - check for outgoing SMTP connections. Most machines probably never
        send any mail outside or route them through a specific relay. If
        one machine suddenly tries to send mails directly to the outside,
        it might be someone trying to steal some data (eg: mail ssh keys).
      - check for long holes in various service logs. The idea is that
        if a system was penetrated and the guy notices he left a number of
        traces, he will probably have wiped some logs. A simple way to check
        for this is to count the number of events per hour and observe huge
        variations. Eg:
           $ cut -c1-9 < /var/log/syslog |uniq -c
           8490 Oct  1 00
           7712 Oct  1 01
           8316 Oct  1 02
           6743 Oct  1 03
           7428 Oct  1 04
           7041 Oct  1 05
           7762 Oct  1 06
           6562 Oct  1 07
           7137 Oct  1 08
            160 Oct  1 09
        Activity looks normal here. Something like this, however, would be
        extremely suspect:
           8490 Oct  1 00
            712 Oct  1 01
           6743 Oct  1 03
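        Counting events per hour like this can be automated. Below is a rough sketch (the helper name and the 20% threshold are arbitrary choices, not part of the original advice) that reads the `uniq -c` output above and flags hours whose event count collapses relative to the previous hour:

```shell
# Flag hours where the event count drops below 20% of the previous hour.
# Input: "count month day hour" lines, as produced by:
#   cut -c1-9 < /var/log/syslog | uniq -c
flag_gaps() {
  awk 'NR > 1 && $1 < prev * 0.2 { print "suspicious drop at:", $2, $3, $4 }
       { prev = $1 }'
}
```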
      - check that you never observe in logs a local address that you
        don't know. For instance, if your reverse proxy is on a DMZ which
        is provided by the same physical switch as your LAN and your switch
        becomes ill and loses all its VLAN configuration, it then becomes
        easy to add an alias to the reverse-proxy to connect directly to
        LAN machines and bypass a firewall (and its logs).
      - it's always a good exercise to check for setuids on all your machines.
        You'll generally discover a number of things you did not even suspect
        existed and will likely want to remove them. For instance, my file
        server had dbus-daemon-launch-helper setuid root. I removed this crap
        as dbus has nothing to do on such a machine. Similarly I don't need
        fdmount to mount floppies. I might not use floppies often, and if I do,
        I know how to use sudo.
           $ find / -user root -perm -4000 -ls
      - a last consideration to keep in mind is that machines which receive
        incoming connections from outside should never be able to go out, and
        should be isolated in their own LAN. It's not hard to do at all, and
        it massively limits the ability to bounce between systems and to steal
        information. It also makes firewall logs much more meaningful, provided
        they are stored on a support with limited access, of course :-)

    Thursday, June 9, 2011

    IPV6 - Chapter 3 ICMPv6


    This white paper discusses ICMPv6 and describes the types of ICMPv6 messages.

    Introducing ICMPv6

    Internet Control Message Protocol (ICMP) is a communication method for reporting packet-handling errors. ICMP for IPv6 (ICMPv6) is the latest version of ICMP. All IPv6 nodes must implement ICMPv6 error reporting.
    ICMPv6 can be used to analyze intranet communication routes and multicast addresses. It incorporates operations from the Internet Group Management Protocol (IGMP) for reporting errors on multicast transmissions, and ICMPv6 packets are used in the IGMP extension Multicast Listener Discovery (MLD) protocol to locate linked multicast nodes. ICMPv6 is also used for operations such as packet Internet groper (ping), traceroute, and Neighbor Discovery.

    ICMPv6 message types

    Like IPv6, ICMPv6 is a network layer protocol. However, IPv6 treats ICMPv6 as an upper-layer protocol because ICMPv6 sends its messages inside IP datagrams. The two types of ICMPv6 messages are
    • error messages
    • information messages

    ICMPv6 error messages

    The ICMPv6 error messages notify the source node of a transmission error, enabling the packet's originator to implement a solution to the reported error and attempt successful retransmission. If the type of error message received is unknown, the message is transferred to an upper-layer protocol for processing. Error messages are identified by type values ranging from 0 to 127.
    Types of packet transmission error messages include
    • Destination Unreachable
    • Parameter Problem
    • Packet Too Big
    • Time Exceeded

    Destination Unreachable

    A router will communicate a Destination Unreachable message to the source address when a message cannot be delivered due to a cause other than congested network paths. The Destination Unreachable message signals the reason for delivery failure using one of five codes.
    Table 1: Destination Unreachable message codes, labels, and causes
    Error message code Error message label Cause of message
    0 No route to destination A router without a default route to the destination address generates this message.
    1 Communication with destination administratively prohibited A packet-filtering firewall generates this message when a packet is denied access to a host behind a firewall.
    2 Not a neighbor This error message is sent when the forwarding node does not share a network link with the next node on the route. It applies to packets using a route defined in the IPv6 routing header extension.
    3 Address unreachable An error resolving the IPv6 destination address to a link-layer address can trigger this message.
    4 Port unreachable The destination address generates this message when there is no transport layer protocol listening for traffic.

    Parameter Problem

    When an error with either the IPv6 header or extension headers prevents successful packet processing, the router sends a Parameter Problem message to indicate the nature of the problem to the source address.

    Packet Too Big

    The router forwards a Packet Too Big message to the source address when the transmitted packet is too large for the maximum transmission unit (MTU) of the link to the recipient address.

    Time Exceeded

    The router communicates a Time Exceeded message to the source address when the value of the Hop Limit field reaches zero.

    ICMPv6 information messages

    Messages with type values of 128 and above are information messages. ICMPv6 information messages, as defined in RFC 1885, can include
    • an Echo Request
    • an Echo Reply
    The Echo Request and Echo Reply messages are part of ping. The purpose of ping is to determine whether specific hosts are connected to the same network. If the type of information message received is unknown, the message should be deleted.
    IGMP and Neighbor Discovery protocol messages are also classed as information messages.

    ICMPv6 message fields

    ICMPv6 packets are located after the last extension header in the IPv6 packet, and they are identified by a value of 58 in the preceding Next Header field. All ICMPv6 packets contain three fields and a message body. The ICMPv6 message fields have specific functions, as shown in the following table.
    Table 2: ICMPv6 message fields
    Message field Field function
    Type An 8-bit field that specifies the type of message and determines the contents of the message body. A value in the Type field from 0 to 127 indicates an error message, and a value from 128 to 255 indicates an information message.
    Code An 8-bit field that provides a numeric code for identifying the type of message.
    Checksum A 16-bit field that identifies instances of data violation in the ICMPv6 message and header. The value of the Checksum field is determined using the contents of the ICMPv6 Message fields and the IPv6 pseudoheader.

    Checksum field

    Before sending an ICMP message, a system calculates a checksum to place in the Checksum field. The checksum is calculated as follows:
    • if the ICMP message contains an odd number of bytes, the system adds an imaginary trailing byte equal to zero
    • the extra byte is used in the checksum calculation but is not sent with the message
    • a pseudoheader, containing source and destination IP addresses, the payload length, and the Next Header byte for ICMP is added to the message
    • the pseudoheader is used for checksum generation only and not transmitted
    • the receiving system verifies the checksum by using the same calculation process as the sending system
    • if the checksum is correct, ICMP accepts the message
    • if the checksum is incorrect, ICMP discards the message
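    The steps above (pad to an even byte count, sum 16-bit words, fold the carries, take the one's complement) can be sketched in bash arithmetic. This is only an illustration of the folding arithmetic; a real ICMPv6 checksum also covers the pseudoheader described above:

```shell
# One's-complement Internet checksum over bytes given as decimal arguments.
inet_checksum() {
  local -a bytes=("$@")
  local sum=0 i
  (( ${#bytes[@]} % 2 )) && bytes+=(0)           # imaginary trailing zero byte
  for (( i = 0; i < ${#bytes[@]}; i += 2 )); do
    (( sum += (bytes[i] << 8) + bytes[i + 1] ))  # big-endian 16-bit words
  done
  while (( sum > 0xFFFF )); do                   # fold carry bits back in
    (( sum = (sum & 0xFFFF) + (sum >> 16) ))
  done
  echo $(( ~sum & 0xFFFF ))                      # one's complement of the sum
}
```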

    Threats to message integrity

    ICMPv6 messages can be subject to malicious attacks. For example, the source address of the message may be spoofed with an alternative address, the message body may be modified, or the message may be intercepted and forwarded to an address other than the intended destination.
    The ICMPv6 authentication mechanism can be applied to ICMPv6 messages to ensure that packets are sent to the intended recipient. A checksum, calculated over the data contents, can also be used to safeguard the integrity of the source address, destination address, and message body.

    Neighbor discovery

    The IPv6 Neighbor Discovery protocol incorporates the IPv4 functions of Address Resolution Protocol (ARP), ICMP Router Discovery messages, and ICMP Redirect messages to communicate information across the network. IPv6 nodes use the Neighbor Discovery protocol to
    • determine the data-link layer addresses of neighbors on the same link
    • determine the reachability of neighbors
    • monitor neighboring routers
    The Neighbor Discovery protocol utilizes five informational message types to assist in neighbor discovery:
    1. Type 133 – Router Solicitation
    2. Type 134 – Router Advertisement
    3. Type 135 – Neighbor Solicitation
    4. Type 136 – Neighbor Advertisement
    5. Type 137 – Redirect
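The five message types can be captured in a simple lookup table. This sketch is our own illustration; the names and type numbers come from the list above, and note that all five fall in the 128–255 range, making them information messages rather than error messages.

```python
# The five ICMPv6 Neighbor Discovery message types listed above.
ND_MESSAGE_TYPES = {
    133: "Router Solicitation",
    134: "Router Advertisement",
    135: "Neighbor Solicitation",
    136: "Neighbor Advertisement",
    137: "Redirect",
}

def nd_message_name(msg_type: int) -> str:
    """Return the Neighbor Discovery message name for a Type value."""
    return ND_MESSAGE_TYPES.get(msg_type, "Not a Neighbor Discovery message")
```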

    Type 133 – Router Solicitation

    The Router Solicitation message is multicast to all routers by a host to prompt routers to generate router advertisement messages.

    Type 134 – Router Advertisement

    Routers transmit Router Advertisement messages in response to a host's Router Solicitation message. Periodically, routers use Router Advertisement messages to identify themselves to hosts on a network.

    Type 135 – Neighbor Solicitation

    A key responsibility of ICMPv6 is the mapping of IP addresses to data-link layer addresses. It uses a simple strategy to do this: in a Neighbor Solicitation message, a node multicasts a request to all hosts on the network, asking for the data-link layer address that corresponds to a particular IP address.

    Type 136 – Neighbor Advertisement

    A Neighbor Advertisement message takes much the same form as a Neighbor Solicitation message. The advertisement includes the target's IP address, and through an option, it also includes the target's data-link layer address.

    Type 137 – Redirect

    ICMPv6 uses the Redirect message to inform the originating node of a more efficient network route for delivery of the forwarded message. Routers forward the ICMPv6 message and transmit a Redirect message to the link-local address of the originating node if
    • a more efficient first-hop route is identified on the same local link as the originating node
    • the originator uses a global IPv6 source address to transmit a packet to a neighbor on the same link
    • the packet was not addressed to the router that received it
    • the target address of the packet is not a multicast address
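The Redirect conditions above can be expressed as a single predicate. This is an illustrative sketch only; the parameter names are our own invention, not from any router implementation, and each one stands in for one bullet in the list.

```python
def should_send_redirect(better_first_hop_on_link: bool,
                         source_is_on_link_neighbor: bool,
                         addressed_to_router: bool,
                         target_is_multicast: bool) -> bool:
    """Return True when a router should transmit a Redirect message,
    per the four conditions listed above (all must hold)."""
    return (better_first_hop_on_link
            and source_is_on_link_neighbor
            and not addressed_to_router      # packet was not for this router
            and not target_is_multicast)     # target is a unicast address
```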


    Internet Control Message Protocol for IPv6 (ICMPv6) is a communication method for reporting packet-handling errors on an IPv6 network. The two message types are information messages and error messages. ICMPv6 is also used for operations such as packet Internet groper (ping), traceroute, and Neighbor Discovery.