
Monday, April 23, 2012

SWAP Space in Linux

Found a very useful topic on Linux.com about swap space on Linux. Copying here.

Linux divides its physical RAM (random access memory) into chunks of memory called pages. Swapping is the process whereby a page of memory is copied to the preconfigured space on the hard disk, called swap space, to free up that page of memory. The combined sizes of the physical memory and the swap space is the amount of virtual memory available.
Swapping is necessary for two important reasons. First, when the system requires more memory than is physically available, the kernel swaps out less used pages and gives memory to the current application (process) that needs the memory immediately. Second, a significant number of the pages used by an application during its startup phase may only be used for initialization and then never used again. The system can swap out those pages and free the memory for other applications or even for the disk cache.
However, swapping does have a downside. Compared to memory, disks are very slow. Memory speeds can be measured in nanoseconds, while disks are measured in milliseconds, so accessing the disk can be tens of thousands of times slower than accessing physical memory. The more swapping that occurs, the slower your system will be. Sometimes excessive swapping, or thrashing, occurs: a page is swapped out, then very soon swapped back in, then swapped out again, and so on. In such situations the system is struggling to find free memory and keep applications running at the same time. In this case only adding more RAM will help.
Linux has two forms of swap space: the swap partition and the swap file. The swap partition is an independent section of the hard disk used solely for swapping; no other files can reside there. The swap file is a special file in the filesystem that resides amongst your system and data files.
To see what swap space you have, use the command swapon -s. The output will look something like this:
Filename    Type        Size     Used   Priority
/dev/sda5   partition   859436   0      -1
Each line lists a separate swap space being used by the system. Here, the 'Type' field indicates that this swap space is a partition rather than a file, and from 'Filename' we see that it is on the disk sda5. The 'Size' is listed in kilobytes, and the 'Used' field tells us how many kilobytes of swap space have been used (in this case none). 'Priority' tells Linux which swap space to use first. One great thing about the Linux swapping subsystem is that if you mount two (or more) swap spaces (preferably on two different devices) with the same priority, Linux will interleave its swapping activity between them, which can greatly increase swapping performance.
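As a hedged illustration (the device names here are examples only), two swap spaces can be activated with equal priority so the kernel interleaves between them, and the same effect can be made permanent with the pri= option in /etc/fstab:

swapon -p 5 /dev/sda5
swapon -p 5 /dev/sdb5
swapon -s
# equivalent /etc/fstab options would be: sw,pri=5 on both entries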
To add an extra swap partition to your system, you first need to prepare it. Step one is to ensure that the partition is marked as a swap partition and step two is to make the swap filesystem. To check that the partition is marked for swap, run as root:
fdisk -l /dev/hdb
Replace /dev/hdb with the device of the hard disk on your system with the swap partition on it. You should see output that looks like this:
Device Boot    Start      End           Blocks  Id      System
/dev/hdb1       2328    2434    859446  82      Linux swap / Solaris
If the partition isn't marked as swap you will need to alter it by running fdisk and using the 't' menu option. Be careful when working with partitions -- you don't want to delete important partitions by mistake or change the id of your system partition to swap by mistake. All data on a swap partition will be lost, so double-check every change you make. Also note that Solaris uses the same ID as Linux swap space for its partitions, so be careful not to kill your Solaris partitions by mistake.
Once a partition is marked as swap, you need to prepare it using the mkswap (make swap) command as root:
mkswap /dev/hdb1
If you see no errors, your swap space is ready to use. To activate it immediately, type:
swapon /dev/hdb1
You can verify that it is being used by running swapon -s. To mount the swap space automatically at boot time, you must add an entry to the /etc/fstab file, which contains a list of filesystems and swap spaces that need to be mounted at boot up. The format of each line is:
<device>  <mount point>  <filesystem type>  <options>  <dump>  <pass>
Since swap space is a special type of filesystem, many of these parameters aren't applicable. For swap space, add:
/dev/hdb1       none    swap    sw      0       0
where /dev/hdb1 is the swap partition. It doesn't have a specific mount point, hence none. It is of type swap with options of sw, and the last two parameters aren't used so they are entered as 0.
To check that your swap space is being automatically mounted without having to reboot, you can run the swapoff -a command (which turns off all swap spaces) and then swapon -a (which mounts all swap spaces listed in the /etc/fstab file) and then check it with swapon -s.

Swap file

As well as the swap partition, Linux also supports a swap file that you can create, prepare, and mount in a fashion similar to that of a swap partition. The advantage of swap files is that you don't need to find an empty partition or repartition a disk to add additional swap space.
To create a swap file, use the dd command to create an empty file. To create a 1GB file, type:
dd if=/dev/zero of=/swapfile bs=1024 count=1048576
/swapfile is the name of the swap file, and the count of 1048576 is the size in kilobytes (i.e. 1GB).
Prepare the swap file using mkswap just as you would a partition, but this time use the name of the swap file:
mkswap /swapfile
And similarly, mount it using the swapon command: swapon /swapfile.
The /etc/fstab entry for a swap file would look like this:
/swapfile       none    swap    sw      0       0
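Putting the swap-file steps together, here is a minimal sketch (the /swapfile path and 1GB size are just the examples used above; the chmod step is an extra precaution that keeps the file readable by root only):

dd if=/dev/zero of=/swapfile bs=1024 count=1048576
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
swapon -s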

How big should my swap space be?

It is possible to run a Linux system without swap space, and the system will run well if you have a large amount of memory -- but if you run out of physical memory, the system will crash, as it has nothing else it can do. It is therefore advisable to have swap space, especially since disk space is relatively cheap.
The key question is how much? Older versions of Unix-type operating systems (such as Sun OS and Ultrix) demanded a swap space of two to three times that of physical memory. Modern implementations (such as Linux) don't require that much, but they can use it if you configure it. A rule of thumb is as follows: 1) for a desktop system, use a swap space of double system memory, as it will allow you to run a large number of applications (many of which will likely be idle and easily swapped), making more RAM available for the active applications; 2) for a server, have a smaller amount of swap available (say half of physical memory) so that you have some flexibility for swapping when needed, but monitor the amount of swap space used and upgrade your RAM if necessary; 3) for older desktop machines (with say only 128MB), use as much swap space as you can spare, even up to 1GB.
The Linux 2.6 kernel added a new kernel parameter called swappiness to let administrators tweak the way Linux swaps. It is a number from 0 to 100. In essence, higher values lead to more pages being swapped, and lower values lead to more applications being kept in memory, even if they are idle. Kernel maintainer Andrew Morton has said that he runs his desktop machines with a swappiness of 100, stating that "My point is that decreasing the tendency of the kernel to swap stuff out is wrong. You really don't want hundreds of megabytes of BloatyApp's untouched memory floating about in the machine. Get it out on the disk, use the memory for something useful."
One downside to Morton's idea is that if memory is swapped out too quickly then application response time drops, because when the application's window is clicked the system has to swap the application back into memory, which will make it feel slow.
The default value for swappiness is 60. You can alter it temporarily (until you next reboot) by typing as root:
echo 50 > /proc/sys/vm/swappiness
If you want to alter it permanently then you need to change the vm.swappiness parameter in the /etc/sysctl.conf file.
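For example, a permanent setting would look like this (a small sketch; sysctl -p reloads the file so you don't have to reboot):

# add to /etc/sysctl.conf
vm.swappiness = 50
# then reload the settings
sysctl -p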

Conclusion

Managing swap space is an essential aspect of system administration. With good planning and proper use, swapping can provide many benefits. Don't be afraid to experiment, and always monitor your system to ensure you are getting the results you need.

Thanks to : Linux.com 

Tuesday, March 20, 2012

Wiping a hard drive

Ever needed to completely wipe out critical data off a hard drive? As we all know, mkfs doesn't erase a lot (you already knew this, right?). mkfs and its variants (such as mkfs.ext3 and mke2fs) only get rid of a few important data structures on the filesystem. But the data is still there! For a SCSI disk connected as /dev/sdb, a quick:
dd if=/dev/sdb | strings
will let anyone recover text data from a supposedly erased hard drive. Binary data is more complicated to retrieve, but the same basic principle applies: the data was not completely erased.
To make things harder for the bad guys, an old trick was to use the 'dd' command as a way to erase a drive (note that this command WILL erase your disk!):
dd if=/dev/zero of=/dev/sdb
There's one problem with this: newer, more advanced, techniques make it possible to retrieve data that was replaced with a bunch of 0's. To make it more difficult, if not impossible, for the bad guys to read data that was previously stored on a disk, Red Hat ships the 'shred' utility as part of the coreutils RPM package. Launching 'shred' on a disk or a partition will write repeatedly (25 times by default) to all locations on the disk (be careful with this one too!):
shred /dev/sdb
This is currently known to be a very safe way to delete data from a hard drive before, let's say, you ship it back to the manufacturer for repair or before you sell it on eBay!
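shred also takes options to tune its behaviour; a hedged example that does three random passes, prints progress, and finishes with a final pass of zeros (again, this WILL destroy all data on /dev/sdb):

shred -v -n 3 -z /dev/sdb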

Refer :
http://www.redhat.com/magazine/026dec06/features/tips_tricks/

Wednesday, December 14, 2011

New features of yum in RHEL-6.1 now that it's released


A few things you might not know about RHEL-6.1+ yum

  • Search is more user friendly

    As we maintain yum we are always looking for the "minor" changes that can make a big difference to the user, and this is probably one of the biggest minor changes. As of late RHEL-5 and RHEL-6.0, "yum search" was great for finding obscure things that you knew something about, but with 6.1 we've hopefully made it useful for finding the "everyday" packages you can't remember the exact name of. We did this by excluding a lot of the "extra" hits when you get a large search result. For instance, "yum search kvm manager" is pretty useless in RHEL-6.0, but in RHEL-6.1 you should find what you want very quickly.
    Example commands:

    yum search kvm manager
    yum search python url
    
  • The updateinfo command

    The "yum-security" or "yum-plugin-security" package has been around since early RHEL-5, but the RHEL-6.1 update has introduced the "updateinfo" command to make things a little easier to use, and you can now easily view installed security errata (to more easily make sure you are secure). We've also added a few new pieces of data to the RHEL updateinfo data. Probably the most significant is that as well as errata being marked "security" or not, they are now tagged with their "severity". So you can automatically apply only "critical" security updates, for example.
Example commands:

yum updateinfo list security all
yum update-minimal --sec-severity=critical


  • The versionlock command

    As with the previous point, we've had "yum-plugin-versionlock" for a long time, but now we've made it easier to use and put all its functions under a single "versionlock" sub-command. You can now also "exclude" specific versions you don't want, instead of locking to known good specific ones you had tested.
Example commands:

# Lock to the version of yum currently installed.
yum versionlock add yum
# Opposite, disallow versions of yum currently available:
yum versionlock exclude yum
yum versionlock list
yum versionlock delete yum\*
yum versionlock clear
# This will show how many "excluded" packages are in each repo.
yum repolist -x .


  • Manage your own .repo variables

    This is actually available in RHEL-6.0, but given that almost nobody knows about it I thought I'd share it here. You can put files in "/etc/yum/vars" and then use the names of those files as variables in any yum configuration, just like $basearch or $releasever. There is also a special $uuid variable, so you can track individual machines if you want to.
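    For example (the variable name and repository URL below are made up purely for illustration), a variable called "myenv" could be created and then referenced from a .repo file:

    # create the variable
    echo "prod" > /etc/yum/vars/myenv

    # use it in any yum configuration, e.g. /etc/yum.repos.d/internal.repo
    [internal]
    name=Internal packages for $myenv
    baseurl=http://repo.example.com/$myenv/$basearch/
    enabled=1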

  • yum has its own DB

    Again, this is something that was there in RHEL-6.0 but has improved (and is likely to improve more over time). The most noticeable addition is that we now store the "installed_by" and "changed_by" attributes; this could be worked out from "yum history" before, but now it's easily available directly from the installed package.
  • Example commands:
    yumdb 
    yumdb info yum 
    yumdb set installonly keep kernel-2.6.32-71.7.1.el6 
    yumdb sync
  • Additional data in "yum history"

    Again, this is something that was there in RHEL-6.0 but has improved (and is likely to improve more over time). The most noticeable additions are that we now store the command line, and we store a "transaction file" that you can use on other machines.
    Example commands:

    yum history
    yum history pkgs yum
    yum history summary
    
    yum history undo last
    
    yum history addon-info 1    config-main
    yum history addon-info last saved_tx
    
    "yum install" is now fully kickstart compatible As of RHEL-6.0 there was one thing you could do in a kickstart package list that you couldn't do in "yum install" and that was to "remove" packages with "-package". As of the RHEL-6.1 yum you can do that, and we also added that functionality to upgrade/downgrade/remove. Apart from anything else, this should make it very easy to turn the kickstart package list into "yum shell" files (which can even be run in kickstart's %post).
    Example commands:

     yum install 'config(postfix) >= 2.7.0'
     yum install MTA
     yum install '/usr/kerberos/sbin/*'
     yum -- install @books -javanotes
    
  • Easier to change yum configuration

    We tended to get a lot of feature requests for a plugin to add a command line option so the user could change a single yum.conf variable, and we had to evaluate those requests for general distribution based on how much we thought all users would want/need them. With the RHEL-6.1 yum we created the --setopt option so that any option can be changed easily, without having to create a specific bit of code. There were also some updates to the yum-config-manager command.
    Example commands:
    yum --setopt=alwaysprompt=false upgrade yum
    yum-config-manager
    yum-config-manager --enable myrepo
    yum-config-manager --add-repo https://example.com/myrepo.repo
  • Working towards managing 10 machines easily

    yum is the best way to manage a single machine, but it isn't quite as good at managing 10 identical machines. While the RHEL-6.1 yum still isn't great at this, we've made a few improvements that should help significantly. The biggest is probably the "load-ts" command, and the infrastructure around it, which allows you to easily create a transaction on one machine, test it, and then "deploy" it to a number of other machines. This is done with checks on the yum side that the machines started from the same place (via rpmdb versions), so that you know you are doing the same operation.
    Also worth noting is that we have added a plugin hook to the "package verify" operation, allowing things like "puppet" to hook into the verification process. A prototype of what that should allow those kinds of tools to do was written by Seth Vidal here.
    Example commands:

    # Find the current rpmdb version for this machine (available in RHEL-6.0)
    yum version nogroups
    # Completely re-image a machine, or dump its "package image"
    yum-debug-dump
    yum-debug-restore \
        --install-latest \
        --ignore-arch \
        --filter-types=install,remove,update,downgrade
    
    # This is the easiest way to get a transaction file without modifying the rpmdb
    echo | yum update blah
    ls ${TMPDIR:-/tmp}/yum_save_tx-* | sort | tail -1
    
    # You can now load a transaction and/or see the previous transaction from the history
    yum load-ts /tmp/yum_save_tx-2011-01-17-01-00ToIFXK.yumtx
    yum -q history addon-info last saved_tx > my-yum-saved-tx.yumtx

    
    
    

    Tuesday, November 22, 2011

    Linux filtering and transforming text - Command Line Reference


    View defined directives in a config file:


    grep -v '^#' /etc/vsftpd/vsftpd.conf | grep .


    View a line matching "Initializing CPU" and the 5 lines immediately after the match using 'grep', or print a specific range of lines using 'sed':


    grep -A 5 "Initializing CPU#1" dmesg
    sed -n 101,110p /var/log/cron    # displays lines 101 to 110 of the log file


    Exclude the empty lines:

    grep -v '^#' /etc/vsftpd/vsftpd.conf | grep .
    grep -v '^#' /etc/ssh/sshd_config | sed -e /^$/d
    grep -v '^#' /etc/ssh/sshd_config | awk '/./{print}'

    More examples of GREP :

    grep smug *.txt                     {search *.txt files for 'smug'}
    grep BOB tmpfile                    {search 'tmpfile' for 'BOB' anywhere in a line}
    grep -i -w blkptr *                 {search files in CWD for word blkptr, any case}
    grep run[- ]time *.txt              {find 'run time' or 'run-time' in all txt files}
    who | grep root                     {pipe who to grep, look for root}
    grep smug files                     {search files for lines with 'smug'}
    grep '^smug' files                  {'smug' at the start of a line}
    grep 'smug$' files                  {'smug' at the end of a line}
    grep '^smug$' files                 {lines containing only 'smug'}
    grep '\^s' files                    {lines starting with '^s', "\" escapes the ^}
    grep '[Ss]mug' files                {search for 'Smug' or 'smug'}
    grep 'B[oO][bB]' files              {search for BOB, Bob, BOb or BoB}
    grep '^$' files                     {search for blank lines}
    grep '[0-9][0-9]' file              {search for pairs of numeric digits}
    grep '^From: ' /usr/mail/$USER      {list your mail}
    grep '[a-zA-Z]' file                {any line with at least one letter}
    grep '[^a-zA-Z0-9]' file            {anything not a letter or number}
    grep '[0-9]\{3\}-[0-9]\{4\}' file   {999-9999, like phone numbers}
    grep '^.$' file                     {lines with exactly one character}
    grep '"smug"' file                  {'smug' within double quotes}
    grep '"*smug"*' file                {'smug', with or without quotes}
    grep '^\.' file                     {any line that starts with a Period "."}
    grep '^\.[a-z][a-z]' file           {line starts with "." and 2 lc letters}


    Grep command symbols used to search files:

    ^ (Caret) = match expression at the start of a line, as in ^A.
    $ (Dollar Sign) = match expression at the end of a line, as in A$.
    \ (Back Slash) = turn off the special meaning of the next character, as in \^.
    [ ] (Brackets) = match any one of the enclosed characters, as in [aeiou].
    Use Hyphen "-" for a range, as in [0-9].
    [^ ] = match any one character except those enclosed in [ ], as in [^0-9].
    . (Period) = match a single character of any value, except end of line.
    * (Asterisk) = match zero or more of the preceding character or expression.
    \{x,y\} = match x to y occurrences of the preceding.
    \{x\} = match exactly x occurrences of the preceding.
    \{x,\} = match x or more occurrences of the preceding.
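
    Combining a few of the symbols above, a small illustrative example (the file name is just a placeholder):

    grep '^[A-Z].*\.$' file     {lines that start with a capital letter and end with a period}
    grep -v '^$' file           {strip blank lines, the inverse of the blank-line search}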

    Thursday, November 10, 2011

    Password policy rules in Linux


    Setting up stronger password policy rules in Linux

    Increased password security is no longer an optional item in setting up a secure system.  Many external organizations (such as PCI) are now mandating security policies that can have a direct effect on your systems.  By default, the account and password restrictions enabled on a Linux box are minimal at best.  To better secure your hosts and meet those requirements from external vendors and organizations, here’s a small how-to on setting up stronger password and account policies in Linux.  This is targeted at RHEL so other distributions may or may not be 100% compatible.

    As an example, let us assume that our security department has created an account security policy document.  This document identifies both account and password restrictions that are now going to be required for all accounts both existing and new.
    The document states that passwords must:
    • Be at least 8 characters long.
    • Use of at least one upper case character.
    • Use of at least one lower case character.
    • Use of at least one special character (!,@#$%, etc)
    • Warn 7 days prior to expiration.
    • Expire after 90 days 
    • Lock after 97 days.
    The good news is that Linux has all of these features and can be set up to meet the requirements given to us.  Unfortunately, though, Linux doesn’t have all this information located in one central place. If you’re not using the RedHat supplied redhat-config-users GUI, you’re going to have to make the changes manually.  Since our server systems don’t run X, we will be making the changes directly to the system without the help of the GUI.
    In RHEL, changes are made in multiple locations.  They are:
    • /etc/pam.d/system-auth
    • /etc/login.defs
    • /etc/default/useradd
    In Linux, password changes are passed through PAM. To satisfy the first three requirements we must modify the PAM entry that corresponds with passwords. /etc/pam.d/system-auth is the PAM file responsible for authentication and where we will make our first modifications. Inside /etc/pam.d/system-auth there are entries based on a “type” that the rules apply to. As we are only discussing password rules, you will see a password type.

    password    requisite     /lib/security/$ISA/pam_cracklib.so retry=3
    The password type is “required for updating the authentication token associated with the user.” Simply put, we need a password type to update the password. Looking at the example, we can see that pam_cracklib is the default module that is responsible for this operation. To configure pam_cracklib to meet our specifications we need to modify the line accordingly:
    • Minimum of 8 characters: minlen=8
    • At least one upper case character: ucredit=-1
    • At least one lower case character: lcredit=-1
    • At least one special character: ocredit=-1
    Our /etc/pam.d/system-auth will now look like this:

    #password    requisite     /lib/security/$ISA/pam_cracklib.so retry=3
    password    requisite     /lib/security/$ISA/pam_cracklib.so retry=3 minlen=8 ucredit=-1 lcredit=-1 dcredit=-1 ocredit=-1
    If you are curious as to how pam_cracklib defines and uses credits, check the link above to the pam_cracklib module page. Next, to meet the requirement to have passwords expire after 90 days, we need to modify the /etc/login.defs file.
    # Password aging controls:
    
    #
    # PASS_MAX_DAYS   Maximum number of days a password may be used.
    # PASS_MIN_DAYS   Minimum number of days allowed between password changes.
    # PASS_MIN_LEN    Minimum acceptable password length.
    # PASS_WARN_AGE   Number of days warning given before a password expires.
    #
    PASS_MAX_DAYS   90
    PASS_MIN_DAYS   0
    PASS_MIN_LEN    8
    PASS_WARN_AGE   7
    Notice that PASS_MIN_LEN is also set here. Since we have been given some latitude on when to warn users, we have chosen to warn users seven days prior to expiration. But our last item is curiously missing. Where do we set up accounts so that after 97 days the account is locked out and requires a system administrator to unlock it?
    Believe it or not, useradd controls the initial locking of an account. Issuing useradd -D will show you the current default parameters that are used when useradd is invoked.

    [root@host ~]# useradd -D
    GROUP=100
    HOME=/home
    INACTIVE=-1
    EXPIRE=
    SHELL=/bin/bash
    SKEL=/etc/skel
    CREATE_MAIL_SPOOL=yes
    The INACTIVE=-1 entry defines when an account will be deactivated. Inactive is defined as the “number of days after a password has expired before the account will be disabled.” Our requirements state that the account should be disabled seven days after password expiration. To set this we can either:

    • Invoke useradd -D -f 7
    • Modify /etc/default/useradd and change the INACTIVE entry.
    Just remember that an inactive or disabled account is a locked account, whereas an expired account is not locked. With this last change, all of the requirements that have been given to us have been met. We modified the password rules for all new passwords, set up the system to activate password aging, and configured the system to disable an account if necessary. But one issue remains — if this is not a new system, what happens to all existing accounts? The answer is nothing.
    In the next installment I’ll show you how to make our modifications effective on existing user accounts…

    Tuesday, October 25, 2011

    Monday, October 24, 2011

    How To Disable SSH Host Key Checking


    Remote login using the SSH protocol is a frequent activity in today's internet world. With the SSH protocol, the onus is on the SSH client to verify the identity of the host to which it is connecting. The host identity is established by its SSH host key. Typically, the host key is auto-created during the initial SSH installation setup.

    By default, the SSH client verifies the host key against a local file containing known, trustworthy machines. This provides protection against possible Man-In-The-Middle attacks. However, there are situations in which you want to bypass this verification step. This article explains how to disable host key checking using OpenSSH, a popular free and open-source implementation of SSH.

    When you login to a remote host for the first time, the remote host's host key is most likely unknown to the SSH client. The default behavior is to ask the user to confirm the fingerprint of the host key.
    $ ssh peter@192.168.0.100
    The authenticity of host '192.168.0.100 (192.168.0.100)' can't be established.
    RSA key fingerprint is 3f:1b:f4:bd:c5:aa:c1:1f:bf:4e:2e:cf:53:fa:d8:59.
    Are you sure you want to continue connecting (yes/no)? 

    If your answer is yes, the SSH client continues login, and stores the host key locally in the file ~/.ssh/known_hosts. You only need to validate the host key the first time around: in subsequent logins, you will not be prompted to confirm it again.

    Yet, from time to time, when you try to remote login to the same host from the same origin, you may be refused with the following warning message:
    $ ssh peter@192.168.0.100
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    @    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
    Someone could be eavesdropping on you right now (man-in-the-middle attack)!
    It is also possible that the RSA host key has just been changed.
    The fingerprint for the RSA key sent by the remote host is
    3f:1b:f4:bd:c5:aa:c1:1f:bf:4e:2e:cf:53:fa:d8:59.
    Please contact your system administrator.
    Add correct host key in /home/peter/.ssh/known_hosts to get rid of this message.
    Offending key in /home/peter/.ssh/known_hosts:3
    RSA host key for 192.168.0.100 has changed and you have requested strict checking.
    Host key verification failed.$

    There are multiple possible reasons why the remote host key changed. A Man-in-the-Middle attack is only one possible reason. Other possible reasons include:
    • OpenSSH was re-installed on the remote host but, for whatever reason, the original host key was not restored.
    • The remote host was replaced legitimately by another machine.

    If you are sure that this is harmless, you can use either of the two methods below to trick OpenSSH into letting you log in. But be warned that you have become vulnerable to man-in-the-middle attacks.

    The first method is to remove the remote host from the ~/.ssh/known_hosts file. Note that the warning message already tells you the line number in the known_hosts file that corresponds to the target remote host. The offending line in the above example is line 3 ("Offending key in /home/peter/.ssh/known_hosts:3").

    You can use the following one liner to remove that one line (line 3) from the file.
    $ sed -i 3d ~/.ssh/known_hosts

    Note that with the above method, you will be prompted to confirm the host key fingerprint when you run ssh to login.
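
    Alternatively, newer OpenSSH releases ship a built-in way to do the same thing without counting lines: ssh-keygen -R removes all keys belonging to the given host from your known_hosts file.

    $ ssh-keygen -R 192.168.0.100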

    The second method uses two openSSH parameters:
    • StrictHostKeyChecking, and
    • UserKnownHostsFile.

    This method tricks SSH by configuring it to use an empty known_hosts file, and NOT to ask you to confirm the remote host identity key.
    $ ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no peter@192.168.0.100
    Warning: Permanently added '192.168.0.100' (RSA) to the list of known hosts.
    peter@192.168.0.100's password:

    The UserKnownHostsFile parameter specifies the database file to use for storing the user host keys (default is ~/.ssh/known_hosts).

    The /dev/null file is a special system device file that discards anything and everything written to it, and when used as the input file, returns End Of File immediately.

    By configuring the null device file as the host key database, SSH is fooled into thinking that the SSH client has never connected to any SSH server before, and so will never run into a mismatched host key.

    The StrictHostKeyChecking parameter specifies whether SSH will automatically add new host keys to the host key database file. By setting it to no, the host key is automatically added, without user confirmation, for all first-time connections. Because of the null key database file, every connection is treated as a first-time connection to any SSH server host, so the host key is automatically added to the host key database with no user confirmation. Writing the key to the /dev/null file discards it and reports success.

    Please refer to this excellent article about host keys and key checking.

    By specifying the above 2 SSH options on the command line, you can bypass host key checking for that particular SSH login. If you want to bypass host key checking on a permanent basis, you need to specify those same options in the SSH configuration file.

    You can edit the global SSH configuration file (/etc/ssh/ssh_config) if you want to make the changes permanent for all users.

    If you want to target a particular user, modify the user-specific SSH configuration file (~/.ssh/config). The instructions below apply to both files.

    Suppose you want to bypass key checking for a particular subnet (192.168.0.0/24).

    Add the following lines to the beginning of the SSH configuration file.
    Host 192.168.0.*
       StrictHostKeyChecking no
       UserKnownHostsFile=/dev/null

    Note that the configuration file should have a line like Host * followed by one or more parameter-value pairs. Host * means that it will match any host. Essentially, the parameters following Host * are the general defaults. Because the first matched value for each SSH parameter is used, you want to add the host-specific or subnet-specific parameters to the beginning of the file.

    As a final word of caution, unless you know what you are doing, it is probably best to bypass key checking on a case by case basis, rather than making blanket permanent changes to the SSH configuration files.


    Refer & Thanks to: http://linuxcommando.blogspot.com/2008/10/how-to-disable-ssh-host-key-checking.html

    Tuesday, October 11, 2011

    Ubuntu Enterprise Cloud (UEC) : How to

    Grow Your Own Cloud Servers With Ubuntu




    Have you been wanting to fly to the cloud, to experiment with cloud computing? Now is your chance. With this article, we will step through the process of setting up a private cloud system using Ubuntu Enterprise Cloud (UEC), which is powered by the Eucalyptus platform.
    The system is made up of one cloud controller (also called a front-end server) and one or more node controllers. The cloud controller manages the cloud environment. You can install the default Ubuntu OS images or create your own to be virtualized. The node controllers are where you can run the virtual machine (VM) instances of the images.

    System Requirements

    At least two computers must be dedicated to this cloud for it to work:
    • One for the front-end server (cloud or cluster controller) with a minimum 1GHz CPU, 512MB of memory, CD-ROM, 40GB of disk space, and an Ethernet network adapter
    • One or more for the node controller(s) with a CPU that supports Virtualization Technology (VT) extensions, 1GB of memory, CD-ROM, 40GB of disk space and an Ethernet network adapter
    You might want to reference a list of Intel processors that include VT extensions. Optionally, you can run a utility, called SecurAble, in Windows. You can also check in Linux if a computer supports VT by seeing if "vmx" or "svm" is listed in the /proc/cpuinfo file. Run the command: egrep '(vmx|svm)' /proc/cpuinfo. Bear in mind, however, this tells you only if it's supported; the BIOS could still be set to disable it.

    Preparing for the Installation

    First, download the CD image for the Ubuntu Server remix — we're using version 9.10 — on any PC with a CD or DVD burner. Then burn the ISO image to a CD or DVD. If you want to use a DVD, make sure the computers that will be in the cloud read DVDs. If you're using Windows 7, you can open the ISO file and use the native burning utility. If you're using Windows Vista or earlier, you can download a third-party application like DoISO.
    Before starting the installation, make sure the computers involved are set up with the peripherals they need (i.e., monitor, keyboard and mouse). Plus, make sure they're plugged into the network so they'll automatically configure their network connections.

    Installing the Front-End Server

    The installation of the front-end server is straightforward. To begin, simply insert the install CD, and on the boot menu select "Install Ubuntu Enterprise Cloud", and hit Enter. Configure the language and keyboard settings as needed. When prompted, configure the network settings.
    When prompted for the Cloud Installation Mode, hit Enter to choose the default option, "Cluster". Then you'll have to configure the Time Zone and Partition settings. After partitioning, the installation will finally start. At the end, you'll be prompted to create a user account.
    Next, you'll configure settings for proxy, automatic updates and email. Plus, you'll define a Eucalyptus Cluster name. You'll also set the IP addressing information, so users will receive dynamically assigned addresses.

    Installing and Registering the Node Controller(s)

    The Node installation is even easier. Again, insert the install disc, select "Install Ubuntu Enterprise Cloud" from the boot menu, and hit Enter. Configure the general settings as needed.
    When prompted for the Cloud Installation Mode, the installer should automatically detect the existing cluster and preselect "Node." Just hit Enter to continue. The partitioning settings should be the last configuration needed.

    Registering the Node Controller(s)

    Before you can proceed, you must know the IP address of the node(s). To check from the command line:
    /sbin/ifconfig
    Then, you must install the front-end server's public ssh key onto the node controller:
    1. On the node controller, set a temporary password for the eucalyptus user using the command:
      sudo passwd eucalyptus
    2. On the front-end server, enter the following command to copy the SSH key:
      sudo -u eucalyptus ssh-copy-id -i ~eucalyptus/.ssh/id_rsa.pub eucalyptus@<node-IP-address>
    3. Then you can remove the eucalyptus account password from the node with the command:
      sudo passwd -d eucalyptus
    4. After the nodes are up and the key copied, run this command from the front-end server to discover and add the nodes:
      sudo euca_conf --no-rsync --discover-nodes

    Getting and Installing User Credentials

    Enter these commands on the front-end server to create a new folder, export the zipped user credentials to it, and then to unpack the files:
    mkdir -p ~/.euca
    chmod 700 ~/.euca
    cd ~/.euca
    sudo euca_conf --get-credentials mycreds.zip (It takes a while for this to complete; just wait)
    unzip mycreds.zip
    cd -
    The user credentials are also available via the web-based configuration utility; however, it would take more work to download the credentials there and move them to the server.

    Setting Up the EC2 API and AMI Tools

    Now you must set up the EC2 API and AMI tools on your front-end server. First, source the eucarc file to set up your Eucalyptus environment by entering:
    . ~/.euca/eucarc
    For this to be done automatically when you login, enter the following command to add that command to your ~/.bashrc file:
    echo "[ -r ~/.euca/eucarc ] && . ~/.euca/eucarc" >> ~/.bashrc
    Now to install the cloud user tools, enter:
    sudo apt-get install euca2ools
    To make sure it's all working, enter the following to display the cluster availability details:
    . ~/.euca/eucarc
    euca-describe-availability-zones verbose

    Accessing the Web-Based Control Panel

    Now you can access the web-based configuration utility. From any PC on the same network, go to the URL https://<cloud-controller-IP>:8443. The IP address of the cloud controller is displayed just after logging onto the front-end server. Note that this is a secure connection using HTTPS instead of just HTTP. You'll probably receive a security warning from the web browser since the server uses a self-signed certificate instead of one handed out by a known Certificate Authority (CA). Ignore the alert by adding an exception. The connection will still be secure.
    The default login credentials are "admin" for both the Username and Password. The first time logging in you'll be prompted to setup a new password and email.

    Installing images

    Now that you have the basic cloud set up, you can install images. Bring up the web-based control panel, click the Store tab, and click the Install button for the desired image. It will start downloading, and then it will automatically install, which takes a long time to complete.

    Running images

    Before running an image on a node for the first time, run these commands to create a keypair for SSH:
    touch ~/.euca/mykey.priv
    chmod 0600 ~/.euca/mykey.priv
    euca-add-keypair mykey > ~/.euca/mykey.priv
    You also need to open port 22 up on the node, using the following commands:
    euca-describe-groups
    euca-authorize default -P tcp -p 22 -s 0.0.0.0/0
    Finally, you can run your registered image. The command to run it is available via the web interface. Login to the web interface, click the Store tab, and select the How to Run link for the desired image. It will display a popup with the exact command.
    The first time you run an instance, it will likely take a while for the image to be cached. You can get the status of your instance by running the command:
    watch -n5 euca-describe-instances
    Once it moves from "pending" to "running", reference the assigned IP address and connect to it:
    IPADDR=$(euca-describe-instances | grep $EMI | grep running | tail -n1 | awk '{print $4}')
    ssh -i ~/.euca/mykey.priv ubuntu@$IPADDR
    When you are done with the instance, you can terminate it:
    INSTANCEID=$(euca-describe-instances | grep $EMI | grep running | tail -n1 | awk '{print $2}')
    euca-terminate-instances $INSTANCEID

    Maintaining the cloud

    Now you should have a working cloud on your network. If you run into problems, you might have to reference the official documentation or hit the message boards. Before I leave, here are a few final tips:
    • To restart the front-end server run: sudo service eucalyptus [start|stop|restart]
    • To restart a node controller run: sudo service eucalyptus-nc [start|stop|restart]
    • Here are some key file locations:
      • Log files
        /var/log/eucalyptus
      • Configuration files
        /etc/eucalyptus
      • Database
        /var/lib/eucalyptus/db
      • Keys
        /var/lib/eucalyptus
        /var/lib/eucalyptus/.ssh
    Eric Geier is the Founder and CEO of NoWiresSecurity, which helps businesses easily protect their Wi-Fi with enterprise-level encryption by offering an outsourced RADIUS/802.1X authentication service. He is also the author of many networking and computing books for brands like For Dummies and Cisco Press.

    Thursday, October 6, 2011

    Setup of VSFTPD virtual users

    If you are hosting several web sites, for security reasons you may want the webmasters to access their own files only. One good way is to give them FTP access by setting up VSFTPD virtual users and directories. This article describes how you can do that easily.
    (See also: Setup of VSFTPD virtual users – another approach)
    1. Installation of VSFTPD
    For Red Hat, CentOS and Fedora, you may install VSFTPD by the command
    # yum install vsftpd
    For Debian and Ubuntu,
    # apt-get install vsftpd
    2. Virtual users and authentication
    We are going to use pam_userdb to authenticate the virtual users. This needs a username / password file in `db’ format – a common database format. We need the `db_load’ program. For CentOS and Fedora, you may install the package `db4-utils’:
    # yum install db4-utils
    For Ubuntu,
    # apt-get install db4.2-util
    To create a `db’ format file, first create a plain text file `virtual-users.txt’ with the usernames and passwords on alternating lines:
    mary
    123456
    jack
    654321
    Then execute the following command to create the actual database:
    # db_load -T -t hash -f virtual-users.txt /etc/vsftpd/virtual-users.db
    Now, create a PAM file /etc/pam.d/vsftpd-virtual which uses your database:
    auth required pam_userdb.so db=/etc/vsftpd/virtual-users
    account required pam_userdb.so db=/etc/vsftpd/virtual-users
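    Since the `db’ file now contains the virtual users' passwords, it is worth tightening its permissions and removing the plain-text source list once the database has been built (a small, optional hardening step):
    # chmod 600 /etc/vsftpd/virtual-users.db
    # rm -f virtual-users.txt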
    3. Configuration of VSFTPD
    Create a configuration file /etc/vsftpd/vsftpd-virtual.conf,
    # disables anonymous FTP
    anonymous_enable=NO
    # enables non-anonymous FTP
    local_enable=YES
    # activates virtual users
    guest_enable=YES
    # virtual users to use local privs, not anon privs
    virtual_use_local_privs=YES
    # enables uploads and new directories
    write_enable=YES
    # the PAM service used to authenticate the virtual users
    pam_service_name=vsftpd-virtual
    # in conjunction with 'local_root',
    # specifies a home directory for each virtual user
    user_sub_token=$USER
    local_root=/var/www/virtual/$USER
    # the virtual user is restricted to the virtual FTP area
    chroot_local_user=YES
    # hides the FTP server user IDs and just displays "ftp" in directory listings
    hide_ids=YES
    # runs vsftpd in standalone mode
    listen=YES
    # listens on this port for incoming FTP connections
    listen_port=60021
    # the minimum port to allocate for PASV style data connections
    pasv_min_port=62222
    # the maximum port to allocate for PASV style data connections
    pasv_max_port=63333
    # controls whether PORT style data connections use port 20 (ftp-data)
    connect_from_port_20=YES
    # the umask for file creation
    local_umask=022
    4. Creation of home directories
    Create each user’s home directory in /var/www/virtual, and change the owner of the directory to the user `ftp’:
    # mkdir /var/www/virtual/mary
    # chown ftp:ftp /var/www/virtual/mary
    5. Startup of VSFTPD and test
    Now we can start VSFTPD by the command:
    # /usr/sbin/vsftpd /etc/vsftpd/vsftpd-virtual.conf
    and test the FTP access of a virtual user:
    # lftp -u mary -p 60021 192.168.1.101
    The virtual user should have full access to his directory.

    Wednesday, October 5, 2011

    Hints on how to check your machine for intrusion



    The compromise of kernel.org and related machines has made it clear that
    some developers, at least, have had their systems penetrated.  As we
    seek to secure our infrastructure, it is imperative that nobody falls
    victim to the belief that it cannot happen to them.  We all need to
    check our systems for intrusions.  Here are some helpful hints as
    proposed by a number of developers on how to check to see if your Linux
    machine might be infected with something:
    
    
    0. One way to be sure that your system is not compromised is to simply
       do a clean install; we can all benefit from a new start sometimes.
       Before reinstalling any systems, though, consider following the steps
       below to learn if your system has been hit or not.
    
    1. Install the chkrootkit package from your distro repository and see if it
   reports anything.  If your distro doesn't have the chkrootkit package,
       download it from:
     http://www.chkrootkit.org/
    
       Another tool is the ossec-rootcheck tool which can be found at:
     http://www.ossec.net/main/rootcheck
    
       And another one is the rkhunter program:
        http://www.rootkit.nl/projects/rootkit_hunter.html
   [Note, this tool has the tendency to give false-positives on some
       Debian boxes, please read /usr/share/doc/rkhunter/README.Debian.gz if
       you run this on a Debian machine]
    
    2. Verify that your package signatures match what your package manager thinks
       they are.
    
       To do this on a rpm-based system, run the following command:
        rpm --verify --all
       Please read the rpm man page for information on how to interpret the
       output of this command.
    
       To do this on a Debian based system, run the following bash snippet:
     dpkg -l \*|while read s n rest; do if [ "$s" == "ii" ]; then echo $n;
     fi; done > ~/tmp.txt
     for f in `cat ~/tmp.txt`; do debsums -s -a $f; done
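   If debsums is installed, a shorter (hedged) equivalent is to let debsums itself walk every installed package:
 debsums -s -a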
       If you have a source-based system (Gentoo, LFS, etc.) you presumably
       know what you are doing already.
    
    3. Verify that your packages are really signed with the distro's keys.
    
       Here's a bash snippet that can do this on a rpm based system to
       verify that the packages are signed with any key, not necessarily
       your distro's key.  That exercise is left for the reader:
    
     for package in `rpm -qa`; do
      sig=`rpm -q --qf '%{SIGPGP:pgpsig}\n' $package`
      if [ -z "$sig" ] ; then
       # check if there is a GPG key, not a PGP one
       sig=`rpm -q --qf '%{SIGGPG:pgpsig}\n' $package`
       if [ -z "$sig" ] ; then
        echo "$package does not have a signature!!!"
       fi
      fi
     done
       Unfortunately there is no known way of verifying this on Debian-based
       systems.
    
    4. To replace a package that you find suspect, uninstall it and install
       it anew from your distro.  For example, if you want to reinstall the
       ssh daemon, you would do:
     $ /etc/init.d/sshd stop
     rpm -e openssh
     zypper install openssh # for openSUSE based systems
     yum install openssh # for Fedora based systems
    
       Ideally do this from a live cdrom boot, using the 'rpm --root' option
       to point rpm at the correct location.
    
    
    5. From a liveCD environment, look for traces such as:
       a. Rogue startup scripts in /etc/rc*.d and equivalent directories.
       b. Strange directories in /usr/share that do not belong to a package.
          This can be checked on an rpm system with the following bash snippet:
     for file in `find /usr/share/`; do
      package=`rpm -qf -- ${file} | grep "is not owned"`
      if [ -n "$package" ] ; then
       echo "weird file ${file}, please check this out"
      fi
     done
    6. Look for mysterious log messages, such as:
       a. Unexpected logins in wtmp and /var/log/secure*, quite possibly
          from legitimate users from unexpected hosts.
       b. Any program trying to touch /dev/mem.
       c. References to strange (non-text) ssh version strings in
          /var/log/secure*.  These do not necessarily indicate *successful*
          breakins, but they indicate *attempted* breakins which means your
          system or IP address has been targeted.
    
    7. If any of the above steps show possible signs of compromise, you
       should investigate further and identify the actual cause.  If it
       becomes clear that the system has indeed been compromised, you should
       certainly reinstall the system from the beginning, and change your
       credentials on all machines that this machine would have had access
       to, or which you connected to through this machine.  You will need
       to check your other systems carefully, and you should almost
       certainly notify the administrators of other systems to which you
       have access.
    
    Finally, please note that these hints are not guaranteed to turn up
    signs of a compromised systems.  There are a lot of attackers out there;
    some of them are rather more sophisticated than others.  You should
    always be on the alert for any sort of unexpected behavior from the
    systems you work with.
     
    ----------------------------------------------------------------------------
    ----------------------------------------------------------------------------
    I would like to add here a few controls I ran on firewall and system logs,
    that are easy to perform and which report few false positives :
    
      - check that communications between your local machines are expected ;
        for instance if you have an SSH bouncing machine, it probably receives
        tens of thousands of SSH connection attempts from outside every day,
        but it should never ever attempt to connect to another machine unless
        it's you who are doing it. So checking the firewall logs for SSH
        connections on port 22 from local machines should only report your
        activity (and nothing should happen when you sleep).
    
      - no SSH log should report failed connection attempts between your
        local machines (you do have your keys and remember your password).
        And if it happens from time to time (eg: user mismatch between
        machines), it should look normal to you. You should never observe
        a connection attempt for a user you're not familiar with (eg: admin).
    
         $ grep sshd /var/log/messages
         $ grep sshd /var/log/messages | grep 'Invalid user'
      - outgoing connections from your laptop, desktop or anything should
        never happen when you're not there, unless there is a well known
        reason (package updates, browser left open and refreshing ads). All
        unexpected activity should be analysed (eg: connections to port 80
        not coming from a browser should only match one distro mirror).
        This is particularly true for cheap appliances which become more
        and more common and are rarely secured. A NAS or media server, a
        switch, a WiFi router, etc... has no reason to ever connect anywhere
        without you being aware of it (eg: download a firmware update).
    
      - check for suspicious DNS requests from machines that are normally
        not accessed. A number of services perform DNS requests when
        connected to, in order to log a resolved address. If the machine
        was penetrated and the logs wiped, the DNS requests will probably
        still lie in the firewall logs. While there's nothing suspect from
        a machine that does tens of thousands DNS requests a day, one that
        does 10 might be suspect.
    
      - check for outgoing SMTP connections. Most machines probably never
        send any mail outside or route them through a specific relay. If
        one machine suddenly tries to send mails directly to the outside,
        it might be someone trying to steal some data (eg: mail ssh keys).
    
      - check for long holes in various service logs. The idea is that
        if a system was penetrated and the guy notices he left a number of
        traces, he will probably have wiped some logs. A simple way to check
        for this is to count the number of events per hour and observe huge
        variations. Eg:
    
           $ cut -c1-9 < /var/log/syslog |uniq -c
           8490 Oct  1 00
           7712 Oct  1 01
           8316 Oct  1 02
           6743 Oct  1 03
           7428 Oct  1 04
           7041 Oct  1 05
           7762 Oct  1 06
           6562 Oct  1 07
           7137 Oct  1 08
            160 Oct  1 09
        Activity looks normal here. Something like this however would be
        extremely suspect :
    
           8490 Oct  1 00
            712 Oct  1 01
           6743 Oct  1 03
    
      - check that you never observe in logs a local address that you
        don't know. For instance, if your reverse proxy is on a DMZ which
        is provided by the same physical switch as your LAN and your switch
        becomes ill and loses all its VLAN configuration, it then becomes
        easy to add an alias to the reverse-proxy to connect directly to
        LAN machines and bypass a firewall (and its logs).
    
      - it's always a good exercise to check for setuids on all your machines.
        You'll generally discover a number of things you did not even suspect
        existed and will likely want to remove them. For instance, my file
        server had dbus-daemon-launch-helper setuid root. I removed this crap
        as dbus has nothing to do on such a machine. Similarly I don't need
        fdmount to mount floppies. I might not use floppies often, and if I do,
        I know how to use sudo.
    
           $ find / -user root -perm -4000 -ls
    
      - last considerations to keep in mind is that machines which receive
        incoming connections from outside should never be able to go out, and
        should be isolated in their own LAN. It's not hard to do at all, and
        it massively limits the ability to bounce between systems and to steal
        information. It also makes firewall logs much more meaningful, provided
        they are stored on a support with limited access, of course :-)
     
     
     
    Also refer: 
    http://www.ossec.net/main/rootcheck
    http://www.rootkit.nl/projects/rootkit_hunter.html
    http://www.chkrootkit.org/ 

    Tuesday, February 8, 2011

    Linux Process - Tips


    What is a Process ?
    When a program is read from disk into memory and its execution begins, the currently executing image is called a process.

    PID
    The process ID is a number between 1 and 32767 by default (this limit is customizable). As root, you can set the value higher (up to 2^22 on 32-bit machines: 4,194,304)
    with:
    # echo 4194303 > /proc/sys/kernel/pid_max
    Alternatively, you can add a line to your /etc/sysctl.conf instead of running the above command. This is the more natural solution for such systems, but you'll need to reboot the system or use the sysctl program for it to take effect. Append the following to your /etc/sysctl.conf:
    #Allow for more PIDs (to reduce rollover problems); may break some programs
    kernel.pid_max = 4194303
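    The same value can be applied immediately, without a reboot, using the sysctl program mentioned above (sysctl -p reloads everything defined in /etc/sysctl.conf):
    # sysctl -w kernel.pid_max=4194303
    # sysctl -p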
    PPID
    Each process in Linux has a parent. Once the system starts, a single process is created, called INIT, whose PID is 1. The INIT process then begins to start the system up, creating processes as needed. These newly created processes may start other processes, but the ultimate parent is always INIT.

    PS command

    "ps" command with no option shows: 
    • Process ID (PID)
    • The terminal (TTY)
    • The amount of CPU time that the process has accumulated (TIME)
    • The command used (CMD)

    "ps -f" give more info(full option). It displays the below options in addition to above
    • The Parent PID (PPID)
    • The process start time (STIME)
    • The user ID (UID)
    Processes other than your own can also be checked with the "ps" command using the -e option. This displays all the processes on the system.
    Also:
    • -u limits the display to a given user's processes
    • -g limits the display to a given group
    • -p limits the display to a given PID
    • -t limits the display to a given terminal

    Managing process In Linux
    Two ways are usually used to manage processes:
    1. Using a signaling system - (sending signals to process using commands kill,skill and pkill)
         Signals are the software interupts used to communicate status and information amongst processes. The TERM signal can be caught or ignored. The KILL signal "9" is not able to be caught or ignored,  and causes immediate termination of the process. Ctrl C sends the INT (2)(Interrupt) signal to the  process- this is the reason of Process termination. "Ctrl \" sends quit (3) signal to running process TERM (Terminate)signal (15) is the default signal send to the process while running the kill  command.
    HUP signal is generated by a modem hangup. It often tell a daemon to reconfigure (restart) itself. 
    kill -1 (kills the shell and logs out).
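    A few illustrative signal commands (PID 1234 and the daemon name are placeholders):
    $ kill 1234             # send the default TERM (15) signal; the process may catch it and clean up
    $ kill -9 1234          # send KILL (9); cannot be caught or ignored, terminates immediately
    $ pkill -HUP syslogd    # send HUP by name, asking the daemon to reconfigure itself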


    2. Using the /proc interface
         Much of the information about processes is available to a user through a special interface known as the /proc file system. The /proc file system does not exist on disk; it is an interface to the running system and the running kernel, and it presents kernel and process information in an easily accessible manner. Every process running on a Linux system has a corresponding directory in /proc, named with the PID of the process.

    Managing processes using /proc:
    There is a wealth of information about a running process in its /proc entry. Most of this information is meant for use by programs like ps, so we need to do some pre-processing before we can view it. The "tr" command can be used for this: by translating ASCII NUL characters to LF (line feed) characters, we get a meaningful display.
    Eg:-
    # tr '\0' '\n' < /proc/1223/environ
    This example shows the environment of process 1223.

    "environ" is the file which contains the environment details of the process.
    "cwd" folder shows the current working directory
    "fd" contains the links to every file that a process may have opened. This directory called fd (File  Descriptor). File Descriptor is a number used by a program to identify an open file. Each process in /proc file system will have a "fd". This is a vital information for a system administrator trying to  manage a large and complex system. For instance, a file system may not be unmounted if any process  has a file opened in that file system. By checking /proc, and administrator can determine and  resolve the problem

    Eg:
    # umount /home
    umount: /home: device is busy
    # ls -al /proc/*/fd | grep home
    This will show which processes have files open under /home.
    Killing a Job
    # kill -9 %1
    The % must be added before the job number; it makes the shell replace the job number with the corresponding process ID.
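    A small end-to-end sketch of job control (sleep stands in for any long-running job):
    $ sleep 1000 &      # start a background job; the shell prints its job number and PID
    $ jobs              # list current jobs with their job numbers
    $ kill %1           # TERM job 1; use kill -9 %1 if it refuses to die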

    Log Files, Errors and Status

    Syslog facilities:
    authpriv
    cron
    daemon
    kern
    lpr
    mail
    news
    uucp
    user
    local0 - local7


    Syslog Priority:
    emerg -  Emergency condition, such as an imminent system crash, usually broadcast to all users
    alert -    Condition that should be corrected immediately, such as a corrupted system database
    crit -     Critical condition, such as a hardware error
    err -      Ordinary error
    warning - Warning
    notice -  Condition that is not an error, but possibly should be handled in a special way
    info -    Informational message
    debug -  Messages that are used when debugging programs
    none -   Do not send messages from the indicated facility to the selected file. For example, specifying
    *.debug;mail.none sends all messages except mail messages to the selected file.
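    Facilities and priorities are combined into selectors in the syslog configuration. A minimal illustrative fragment (the file is /etc/syslog.conf or /etc/rsyslog.conf depending on the distribution, and the log file paths below are just examples):
    # everything at info or higher, except mail and private authentication messages
    *.info;mail.none;authpriv.none          /var/log/messages
    # authentication and cron messages in their own files
    authpriv.*                              /var/log/secure
    cron.*                                  /var/log/cron
    # broadcast emergencies to all logged-in users
    *.emerg                                 *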

    Note:-
    Logrotate keeps 4 weeks of logs before the oldest log is rotated out or deleted. Syslog entries all share a common format: each entry starts with the date and time, followed by the name of the system that logged the message.

    CORE Error Handling:
    When unexpected errors occur, the system may create a core file. A core file contains a copy of the memory image of the process at the time the error occurred. It is named "core" because main system memory was originally called core memory, as it was made up of ferrite donuts (cores) that were wired together through their holes.

    A core file can be used to autopsy a dead process. Even if you are not a programmer and do not have access to core analysis tools, core files can still be used to find information that may help you identify the cause of the program's death.

    The first thing to do with a core file is to use the "file" command to determine what program produced the core and what (if any) signal initiated the dump. Core files are normally called core or core.xxxx, where "xxxx" is the PID of the process before it died. "man 7 signal" will bring up a list of signals. By these means we can determine the issue, and the program's author can be notified if a bug is suspected (for example, if an invalid memory reference occurred).
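    For example (core.4321 is a hypothetical core file name):
    $ file core.4321    # identify which program produced the core (and, on some systems, the signal)
    $ man 7 signal      # look up the signal number to understand why the process died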

    strings Command:
         The strings program displays printable strings from a binary file. Using strings on a core file, you can display all of the strings included in the core image. At the end of the core file is the process environment, including the command used to start the program.
    This information can give vital clues to the cause of death. Looking through the core file for pathnames can also reveal the configuration files and shared libraries required to run the program.
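    Two illustrative uses of strings on the same hypothetical core file:
    $ strings core.4321 | tail -40      # the environment and command line tend to be near the end of the image
    $ strings core.4321 | grep '^/'     # pathnames of configuration files and shared libraries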

    Customizing the Shell
         In Bash there are four prompt strings, all of which can be customized. They are held in the environment variables PS1, PS2, PS3 and PS4. The normal command prompt, which is displayed to indicate the shell is ready for a new command, is found in PS1. Should a command require more than a single line of input, the secondary prompt string PS2 is displayed; this can be seen when typing flow control statements interactively.
    The select statement uses PS3 to display the prompt for the generated menu. The default is "#?".
    Finally, PS4 is used when debugging shell scripts. The shell allows an execution trace, showing each command as it is executed; this is enabled with the -x option to the shell, or with "set -x" at the start of the script.

    PS3 and PS4 can be set to any text, which is displayed unchanged; there is no way to place variable text within these strings. However, PS1 and PS2 can contain text that is evaluated each time the prompt is displayed. This can be done with the $(command) syntax, or with a special set of characters used specifically for this purpose.
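    A short sketch of prompt customization (\u, \h, \w and \W are standard bash prompt escapes; the exact strings are just examples):
    $ PS1='[\u@\h \W]\$ '            # user, host and current directory
    $ PS1='$(date +%H:%M) \w \$ '    # command substitution re-evaluated each time the prompt is shown
    $ PS2='more> '                   # secondary prompt for continued lines
    $ PS4='+ line ${LINENO}: '       # trace prefix used by set -x when debugging scripts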

    Some Notes About the Linux File System
         The structure of a file system determines its use and the manner in which commands and utilities interact with it. This is especially true of management commands that change or affect the file system. Because of this we need to explore the structure of a Linux file system before we can look at the file system management commands.

    All Linux filesystems have a similar logical structure as far as the user or system commands are concerned. This is achieved by the file system driver logic in the Linux kernel, regardless of the underlying data layout. A file system usually consists of a master information table called the superblock, a list of file summary information blocks called inodes, and the data blocks associated with your data.

    Every filesystem has its own root directory, which is always identified by inode number 2, the first usable inode in a Linux filesystem. This directory is special in that it can be used to attach the filesystem to the main, or root, filesystem. The directory on the root or parent filesystem at which the new filesystem is attached is called the mount point.
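    For instance, attaching a filesystem at a mount point (/dev/sdb1 and /mnt/data are placeholders for your own partition and directory):
    # mkdir -p /mnt/data
    # mount /dev/sdb1 /mnt/data    # the new filesystem's root directory now appears at /mnt/data
    # ls -id /mnt/data             # on ext-family filesystems this shows inode 2, the filesystem's root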

    /dev files:
         A /dev entry looks like any other file except that it does not have a size. Instead it has a major and a minor device number, and a block or character designation. The major number identifies which device driver is being used. There are two kinds of device drivers, block and character, each with their own set of major numbers.

    The minor number identifies the sub-device or operation for the device driver. For example, a tape drive may have different minor numbers for operation in compressed and uncompressed mode. There are a number of general-purpose devices as well. The /dev/null file is also known as the "bit bucket" because it will take anything that is written to it and discard it. It is often used to discard unwanted error messages or to test commands. A similar file is /dev/zero, which does the same for writes, but when read returns as many NUL (hex 00) characters as you ask for. This is often used to create zero-filled files for testing or for database initialization.
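    A couple of small examples of these devices in use (noisy_command is a hypothetical program):
    $ ls -l /dev/null /dev/zero                     # character devices; major and minor numbers appear where the size normally is
    $ noisy_command 2> /dev/null                    # discard unwanted error messages
    $ dd if=/dev/zero of=testfile bs=1M count=10    # create a 10 MB zero-filled file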

    lost+found Directory:
         Every file system requires a directory called lost+found in the root directory of the filesystem. This is used by the system when checking and rebuilding a corrupted file system. Files that have  inodes, but no directory entry, are moved to the lost+found directory. If there are files in this  directory, they will be named with the inode number, as an indication that the file system has suffered  some damage.

    Thursday, July 22, 2010

    HowTo: 10 Steps to Configure tftpboot Server in UNIX / Linux (For installing Linux from Network using PXE)


    In this article, let us discuss how to set up tftpboot, including installation of the necessary packages and the tftpboot configuration.

    TFTP boot service is primarily used to perform OS installation on a remote machine to which you don't have physical access. In order to perform the OS installation successfully, there must be a way to reboot the remote server, either using wakeonlan, having someone reboot it manually, or some other means.

    In those scenarios, you can set up the tftpboot services accordingly, and the OS installation can be done remotely (you need the autoyast configuration file to automate the OS installation steps).

    The step-by-step procedure presented in this article is for SLES10-SP3 on the 64-bit architecture. However, the steps are pretty similar on other Linux distributions.

    Required Packages

    The following packages need to be installed for the tftpboot setup.
    dhcp services packages: dhcp-3.0.7-7.5.20.x86_64.rpm and dhcp-server-3.0.7-7.5.20.x86_64.rpm
    tftpboot package: tftp-0.48-1.6.x86_64.rpm
    pxeboot package: syslinux-3.11-20.14.26.x86_64.rpm

    Package Installation
    Install the packages for the dhcp server services:
    $ rpm -ivh dhcp-3.0.7-7.5.20.x86_64.rpm
    $ rpm -ivh dhcp-server-3.0.7-7.5.20.x86_64.rpm
    $ rpm -ivh tftp-0.48-1.6.x86_64.rpm
    $ rpm -ivh syslinux-3.11-20.14.26.x86_64.rpm

    After installing the syslinux package, the pxelinux.0 file will be available under the /usr/share/syslinux/ directory. This is required to load the install kernel and initrd images on the client machine.

    Verify that the packages are successfully installed.
    $ rpm -qa | grep dhcp
    $ rpm -qa | grep tftp

    Download the appropriate tftpserver from the repository of your respective Linux distribution.
     
    Steps to setup tftpboot
    Step 1: Create /tftpboot directory
    Create the tftpboot directory under root directory ( / ) as shown below.
    # mkdir /tftpboot/
    Step 2: Copy the pxelinux image
    The pxelinux image will be available once you have installed the syslinux package. Copy it to the /tftpboot path as shown below.
    # cp /usr/share/syslinux/pxelinux.0 /tftpboot
    Step 3: Create the mount point for ISO and mount the ISO image
    Let us assume that we are going to install the SLES10 SP3 Linux distribution on a remote server. If you have the SLES10-SP3 DVD, insert it in the drive, or mount the ISO image which you have. Here, the ISO image has been mounted as follows:
    # mkdir /tftpboot/sles10_sp3
    # mount -o loop SLES-10-SP3-DVD-x86_64.iso /tftpboot/sles10_sp3
    Refer to our earlier article on How to mount and view ISO files.

    Step 4: Copy the vmlinuz and initrd images into /tftpboot
    Copy the install kernel (linux) and the initrd to the tftpboot directory as shown below.
    # cd /tftpboot/sles10_sp3/boot/x86_64/loader 
    # cp initrd linux /tftpboot/
    Step 5: Create pxelinux.cfg Directory
    Create the directory pxelinux.cfg under /tftpboot and define the pxe boot definitions for the client.
    # mkdir /tftpboot/pxelinux.cfg 
    # cat >/tftpboot/pxelinux.cfg/default 
    default linux 
    label linux 
    kernel linux 
    append initrd=initrd showopts instmode=nfs install=nfs://192.168.1.101/tftpboot/sles10_sp3/
    The following options are used:
    kernel – specifies where to find the Linux install kernel on the TFTP server.
    install – specifies boot arguments to pass to the install kernel.

    As per the entries above, the NFS install mode is used for serving the install RPMs and configuration files, so set up NFS on this machine with the /tftpboot directory in the exported list (see the example below). You can add the "autoyast" option with an autoyast configuration file to automate the OS installation steps; otherwise you need to run through the installation steps manually.
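    A minimal sketch of the NFS export, assuming the same 192.168.1.0/24 network used elsewhere in this article:
    # cat /etc/exports
    /tftpboot   192.168.1.0/255.255.255.0(ro,no_root_squash,sync)
    # exportfs -ra    # re-export after editing /etc/exports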

    Step 6: Change the owner and permission for /tftpboot directory
    Assign nobody:nobody to /tftpboot directory.
    # chown nobody:nobody /tftpboot 
    # chmod 777 /tftpboot
    Step 7: Modify /etc/dhcpd.conf
    Modify the /etc/dhcpd.conf as shown below.
    # cat /etc/dhcpd.conf 
    ddns-update-style none; 
    default-lease-time 14400; 
    filename "pxelinux.0"; 
    # IP address of the dhcp server, i.e., this machine. 
    next-server 192.168.1.101; 
    subnet 192.168.1.0 netmask 255.255.255.0 { 
    # ip distribution range between 192.168.1.1 to 192.168.1.100 
    range 192.168.1.1 192.168.1.100; 
    default-lease-time 10; 
    max-lease-time 10; 
    }
    Specify the interface in /etc/sysconfig/dhcpd so the server listens for DHCP requests coming from clients.
    # grep DHCPD_INTERFACE /etc/sysconfig/dhcpd
    DHCPD_INTERFACE="eth1"
    Here, this machine has the IP address 192.168.1.101 on the eth1 device, so eth1 is specified for DHCPD_INTERFACE as shown above.

    On a related note, refer to our earlier article about 7 examples to configure network interface using ifconfig.

    Step 8: Modify /etc/xinetd.d/tftp
    Modify the /etc/xinetd.d/tftp file to reflect the following. By default the disable parameter is set to "yes"; make sure you change it to "no", and change the server_args entry to -s /tftpboot.
    # cat /etc/xinetd.d/tftp 
    service tftp { 
         socket_type = dgram 
         protocol = udp 
         wait = yes 
         user = root 
         server = /usr/sbin/in.tftpd 
         server_args = -s /tftpboot 
         disable = no 
                      }
    Step 9: No changes in /etc/xinetd.conf
    There is no need to modify the /etc/xinetd.conf file. Use the default values specified in xinetd.conf.

    Step 10: Restart xinetd, dhcpd and nfs services
    Restart these services as shown below.
    # /etc/init.d/xinetd restart 
    # /etc/init.d/dhcpd restart 
    # /etc/init.d/nfsserver restart
    After restarting the NFS services, you can view the exported directory list (/tftpboot) with the following command:
    # showmount -e
    Finally, the tftpboot setup is ready, and the client machine can now be booted after setting the first boot device to "network" in the BIOS settings.
    If you encounter any TFTP errors, you can troubleshoot by retrieving a file through the tftpd service.

    Retrieve a file from the tftp server using the tftp client to make sure the tftp service is working properly. Let us assume that a sample.txt file is present under the /tftpboot directory.
    $ tftp -v 192.168.1.101 -c get sample.txt

    Reference:
    This article is a copy of http://www.thegeekstuff.com/2010/07/tftpboot-server/

    Monday, December 28, 2009

    What happens when you browse to a web site

    This is a perennial favorite in technical interviews: "so you type 'www.example.com' in your favorite web browser. In as much detail as you can, tell me what happens."
    Let's assume that we do this on a Linux (or other UNIX-like) system. Here's what happens, in enough detail to make your eyes bleed.
    1. Your web browser invokes the gethostbyname() function to turn the hostname you entered into an IP address. (We'll get back to how this happens, exactly, in a moment.)
    2. Your web browser consults the /etc/services file to determine what well-known port HTTP resides on, and finds 80.
    3. Two more pieces of information are determined by Linux so that your browser can initiate a connection: your local IP address, and an ephemeral port number. Combined with the destination (server) IP address and port number, these four pieces of information represent what is called an Internet PCB, or protocol control block. Furthermore, the IANA defines the port range 49,152 through 65,535 for use in this capacity. Exactly how a port number is chosen from this range depends upon the Linux kernel version. The most common allocation algorithm is to simply remember the last-allocated number, and increment it by one each time a new PCB is requested. When 65,535 is reached, the algorithm loops around to 49,152. (This has certain negative security implications, and is addressed in more detail in Port Randomization by Larsen, Ericsson, et al, 2007.) Also see TCP/IP Illustrated, Volume 2: The Implementation by Wright and Stevens, 1995.
    4. Your web browser sends an HTTP GET request to the remote server. Be careful here, as you must remember that your web browser does not speak TCP, nor does it speak IP. It only speaks HTTP. It doesn't care about the transport protocol that gets its HTTP GET request to the server, nor how the server gets its answer back to it.
    5. The HTTP packet passes down the four-layer model that TCP/IP uses, from the application layer where your browser resides to the transport layer. This is a connectionless layer, with addressing based upon URLs, or uniform resource locators.
    6. The transport layer encapsulates the HTTP request inside TCP (transmission control protocol. Transport layer for transmission control, makes sense, right?) The TCP packet is then passed down to the second layer, the network layer. This is a connection-based or persistent layer, with addressing based upon port numbers. TCP does not care about IP addresses, only that some specific port on the client side is bound to a specific port on the server side.
    7. The network layer uses IP (Internet protocol), and adds an IP header to the TCP packet. The packet is then passed down to the first layer, the link layer. This is a connectionless or best-effort layer, with addressing based upon 32-bit IP addresses. Routing, but not switching, occurs at this layer.
    8. The link layer uses the Ethernet protocol. This is a connectionless layer, with addressing based upon 48-bit Ethernet addresses. Switching occurs at this layer.
    9. The kernel must determine which connection to send the packet over. This happens by taking the IP address and consulting the routing table (seen by running netstat -rn; see the short example after this list). First, the kernel attempts to match the destination by host address. (For example, if you have a specific route to just the one host you're trying to reach in your browser.) If this fails, then network address matching is tried. (For example, if you have a specific route to the network in which the host you're trying to reach resides.) Lastly, the kernel searches for a default route entry. This is the most common case.
    10. Now that the kernel knows the next hop, that is, the node that the packet should be handed off to, the kernel must make a physical connection to it. Routing depends upon each node in the chain having a literal electrical connection to the next node; it doesn't matter how many nodes (or hops) the packet must pass through so long as each and every one can "see" its neighbor. This is handled on the link layer, which if you'll recall uses a different addressing scheme than IP addresses. This is where ARP, or the address resolution protocol, comes into play. Let's say your machine is 1.2.3.4, and the default gateway is 1.2.3.5. The kernel will send an ARP broadcast which says, "Who has 1.2.3.5? Tell 1.2.3.4." The default gateway machine will see the ARP request and reply, saying "Hey 1.2.3.4, 1.2.3.5 is 8:0:20:4:3f:2a." The kernel places the answer in the ARP cache, which can be viewed by running arp -a. Now that this information is known, the kernel adds an Ethernet header to the packet, and places it on the wire.
    11. The default gateway receives the packet. First, it checks to see if the Ethernet address matches its own. If it does not, the packet is silently discarded (unless the interface is in promiscuous mode.) Next, it checks to see if the destination IP address matches any of its configured interfaces. In our scenario here, it does not: remember that the packet is being routed to another destination by way of this gateway. So the gateway now checks to see if it is configured to permit IP forwarding. If it is not, the packet is silently discarded. We'll assume the gateway is configured to forward IP, so now it must determine what to do with the packet. It consults its routing table, and attempts to match the destination in the same way our web browser system did a moment ago: exact host match first, then network, then default gateway. Yes, a default gateway server can itself have a default gateway. It also uses ARP in the same way as we saw a moment ago in order to reach the next hop, and pass the packet on to it. Before doing so, however, it decrements the TTL (time-to-live) field in the packet, and if it becomes 1 or 0, discards the packet and sends an ICMP TTL expired in transit message back to the sender. Each hop along the way does the same thing. Also, if the packet came in on the same interface that the gateway's routing table says the packet should go out over to reach the next hop, an ICMP redirect message is sent to the sender, instructing it to bypass this gateway and directly contact the next hop on all subsequent packets. You'll know if this happened because a new route will appear in your web browser machine's routing table.
    12. Each hop passes the packet along, until at the destination the last router notices that it has a direct route to the destination, that is, a routing table entry is matched that is not another router. The packet is then delivered to the destination server.
    13. The destination server notices that at long last the IP address in the packet is its own, that is, it resolves via ARP to the Ethernet address of the server itself. Since it's not a forwarding case, and since the IP address matches, it now examines the TCP portion of the packet to determine the destination port. It also looks at the TCP header flags, and since this is the first packet, observes that only the SYN (synchronize) flag is set. Thus, this first packet is one of three in the TCP handshake process. If the port the packet is addressed to (in our case, port 80) is not bound by a process (for example, if Apache crashed) then an ICMP port unreachable message is sent to the sender and the packet is discarded. If the port is valid, and we'll assume it is, a TCP reply is sent, with both the SYN and ACK (acknowledge) flags set.
    14. The packet passes back through the various routers, and unless source routing is specified, the path back may differ from the path used to first reach the server. The client (the machine running your web browser) receives the packet, notices that it has the SYN and ACK flags set, and contains IP and port information that matches a known PCB. It replies with a TCP packet that has only the ACK flag set.
    15. This packet reaches the server, and the server moves the connection from PENDING to ESTABLISHED. Using the mechanisms of TCP, the server now guarantees data delivery between itself and the client until such time as the connection times out, or is closed by either side. This differs sharply from UDP, where there is no handshake process and packet delivery is not guaranteed, it is only best-effort and left up to the application to figure out if the packets go there or not.
    16. Now that we have a live TCP connection, the HTTP request that started all of this may be sent over the connection to the server for processing. Depending on whether or not the HTTP server (and client) supports such, the reply may consist of only a single object (usually the HTML page) and the connection closed. If persistence is enabled, then the connection is left open for subsequent HTTP requests (for example, all of the page elements, such as images, style sheets, etc.)
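    Several of the structures mentioned above can be inspected from the shell; a few illustrative commands:
    $ grep -w http /etc/services                    # the well-known port for HTTP (80/tcp)
    $ cat /proc/sys/net/ipv4/ip_local_port_range    # the local ephemeral port range used for new connections
    $ netstat -rn                                   # routing table: host routes, network routes, default gateway
    $ arp -a                                        # ARP cache of IP-to-Ethernet address mappings
    $ netstat -tn | grep ':80 '                     # established TCP connections to remote port 80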
    Okay, as I mentioned earlier, we will now address how the client resolves the hostname into an IP address using DNS. All of the above ARP and IP information holds true for the DNS query and replies.
    1. The gethostbyname() function must first determine how it should go about turning a hostname into an IP address. To accomplish this, it consults the /etc/nsswitch.conf file, and looks for a line beginning with hosts:. It then examines the keywords listed, and tries each of them in the order given. For the purposes of this example, we'll assume the pattern to be files dns nis.
    2. The keyword files instructs the resolver (this work is done by the C library, not the kernel) to consult the /etc/hosts file. Since the web server we're trying to reach doesn't have an entry there, the match attempt fails. The resolver checks to see if another resolution method exists, and if it does, it tries it.
    3. The next method is dns, so the resolver now consults the /etc/resolv.conf file to determine what DNS server, or name resolver, it should contact (see the example after this list).
    4. A UDP request is sent to the first-listed name server, addressed to port 53.
    5. The DNS server receives the request. It examines it to determine if it is authoritative for the requested domain; that is, does it directly serve answers for the domain? If not, then it checks to see if recursion is permitted for the client that sent the request.
    6. If recursion is permitted, then the DNS server consults its hints file (often called named.ca) for the appropriate root DNS server to talk to. It then sends a DNS request to the root server, asking it for the authoritative server for this domain. The root domain server replies with a third DNS server's name, the authoritative DNS server for the domain. This is the server that is listed when you perform a whois on the domain name.
    7. The DNS server now contacts the authoritative DNS server, and asks for the IP address of the given hostname. The answer is cached, and the answer is returned to the client.
    8. If recursion is not supported, then the DNS server simply replies with go away or go talk to a root server. The client is then responsible for carrying on, as follows.
    9. The client receives the negative response, and sends the same DNS request to a root DNS server.
    10. The root DNS server receives the query, and since it is not configured to support recursion, but is a root server, it responds with "go ask so-and-so, that's the authoritative server for that domain." Note that this is not the final answer, but is a definite improvement over a simple go away.
    11. The client now knows who to ask, and sends the original DNS query for a third time to the authoritative server. It replies with the IP address, and the lookup process is complete.
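    On the client side, the resolver configuration and a manual lookup might look like this (www.example.com is a placeholder; dig ships with the bind-utils package on many distributions):
    $ grep '^hosts:' /etc/nsswitch.conf    # e.g. "hosts: files dns nis" - the lookup order
    $ cat /etc/resolv.conf                 # nameserver lines list the DNS servers queried on port 53
    $ dig www.example.com A                # query the first listed server; add +norecurse to forbid recursion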
    A few notes on things that I didn't want to clutter up the above narrative with:
    • When a network interface is first brought online, be it during boot or manually by the administrator, something called a gratuitous ARP request is broadcast. It literally asks, "who has 1.2.3.4? Tell 1.2.3.4." This looks redundant at first glance, but it actually serves a dual purpose: it allows neighboring machines to cache the new IP to Ethernet address mapping, and if another machine already has that IP address, it will reply with a typical ARP response of "Hey 1.2.3.4, 1.2.3.4 is 8:0:20:4:3f:2a." The first machine will then log an error message to the console saying "IP address 1.2.3.4 is already in use by 8:0:20:4:3f:2a." This is done to communicate to you that your Excel spreadsheet of IP addresses is wrong and should be replaced with something a bit more accurate and reliable.
    • The Ethernet layer contains a lot more complexities than I detailed above. In particular, because only one machine can be talking over the wire at a time (literally due to electrical limitations) there are various mechanisms in place to prevent collisions. The most widely used is called CSMA/CD, or Carrier Sense Multiple Access with Collision Detection, where each network card is responsible for only transmitting when a wire clear carrier signal is present. Also, should two cards start transmitting at the exact same instant, all cards are responsible for detecting the collision and reporting it to the responsible cards. They then must stop transmitting and wait a random time interval before trying again. This is the main reason for network segmentation; the more hosts you have on a single wire, the more collisions you'll get; and the more collisions you get, the slower the overall network becomes.