Thursday, August 20, 2009

Solaris Service Management Facility Utilities & Description

svcs

# svcs -a
: Lists all services currently installed, including their state.

# svcs -d FMRI

: Lists dependencies for FMRI.

# svcs -D FMRI

: Lists dependents for FMRI.

# svcs -l FMRI

: Provides a long listing of information about FMRI, including dependency information.


# svcs -p FMRI

: Shows relationships between services and processes.

# svcadm enable|disable -t FMRI

: The -t flag makes the change temporary (it does not persist past a reboot). Note that this flag belongs to svcadm, not svcs; see the svcadm section below.

# svcs -x

: Explains why a service is not available.

# svcs -xv

: Verbose debugging information.
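Since svcs emits plain columnar text, ordinary shell tools can post-process it. A minimal sketch (run here against a canned, hypothetical `svcs -a`-style sample rather than a live Solaris box) that pulls out every service not in the online state, roughly the check that `svcs -x` automates:

```shell
# canned `svcs -a`-style output (hypothetical sample, not from a live system)
svcs_sample='STATE          STIME    FMRI
online         10:01:22 svc:/system/filesystem/local:default
maintenance    10:02:10 svc:/network/telnet:default
offline        10:02:11 svc:/network/smtp:sendmail'

# skip the header line, then print state and FMRI for anything not online
not_online=$(printf '%s\n' "$svcs_sample" | awk 'NR > 1 && $1 != "online" {print $1, $3}')
echo "$not_online"
```

On a real system the pipeline would read `svcs -a | awk ...` directly instead of the canned sample.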


svcadm
# /usr/sbin/svcadm -v enable [-rst] {FMRI | pattern}
: Enables the service instances specified by the operands.

# /usr/sbin/svcadm -v disable [-st] {FMRI | pattern}
: Disables the service instance specified by the operands

# /usr/sbin/svcadm -v restart {FMRI | pattern}
: Requests that the service instances specified by the operands be restarted.

# /usr/sbin/svcadm -v refresh {FMRI | pattern}
: For each service instance specified by the operands, requests that the assigned restarter update the
service's running configuration snapshot with the values from the current configuration.

# /usr/sbin/svcadm -v clear {FMRI | pattern}
: For each service instance specified by the operands, if the instance is in the maintenance state, signals to the
assigned restarter that the service has been repaired. If the instance is in the degraded state, requests that
the assigned restarter take the service to the online state.

# /usr/sbin/svcadm -v mark [-It] instance_state {FMRI | pattern}
: If instance_state is "maintenance", then for each service specified by the operands, svcadm requests that the
assigned restarter place the service in the maintenance state.
Eg:- svcadm -v mark -t maintenance telnet
marks the telnet service as being in maintenance mode.
: If instance_state is "degraded", then for services specified by the operands that are in the online state, svcadm
requests that the restarters assigned to them move the services into the degraded state.

# /usr/sbin/svcadm [-v] milestone [-d] milestone_FMRI
Eg:-
1. The following command restricts the running services to single user mode:

# svcadm milestone milestone/single-user

2. The following command restores the running services:

# svcadm milestone all

Options:
enable [-rst]
-r : svcadm enables each service instance and recursively enables its dependencies.
-s : svcadm enables each service instance and then waits for each service instance
to enter the online or degraded state. svcadm will return early if it determines that the service cannot
reach these states without administrator intervention.
-t : svcadm temporarily enables each service instance. A temporary enable only
lasts until reboot.
disable [-st]
-s : svcadm disables each service instance and then waits for each service instance
to enter the disabled state. svcadm will return early if it determines that the service cannot reach this state
without administrator intervention.
-t : svcadm temporarily disables each service instance. A temporary disable only
lasts until reboot.

mark [-It]
-I : the request is flagged as immediate.
-t : the state change is temporary (it does not persist past reboot).
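Putting svcadm and svcs together, a typical repair sequence for a service stuck in maintenance looks like the sketch below. Since this needs a live Solaris box and root to execute, a hypothetical `run` wrapper (not part of SMF) just echoes each command; drop the wrapper to run the steps for real:

```shell
# hypothetical dry-run wrapper: echoes each step instead of executing it
run() { echo "+ $*"; }

fmri=svc:/network/telnet:default    # example FMRI

run svcs -x "$fmri"                 # 1. see why the service is not available
run svcadm clear "$fmri"            # 2. signal the restarter that it is repaired
run svcadm -v enable -s "$fmri"     # 3. enable and wait for online/degraded
run svcs -l "$fmri"                 # 4. confirm state and dependencies
```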


inetadm

# inetadm
The inetadm utility provides the following capabilities for
inetd-managed SMF services:

o Provides a list of all such services installed.

o Lists the services' properties and values.

o Allows enabling and disabling of services.

o Allows modification of the services' property
values, as well as the default values provided by
inetd.
Options:

-p
Lists all default inet service property values provided
by inetd in the form of name=value pairs. If the value
is of boolean type, it is listed as TRUE or FALSE.

-l {FMRI | pattern}...

List all properties for the specified service instances
as name=value pairs. In addition, if the property value
is inherited from the default value provided by inetd,
the name=value pair is identified by the token
(default). Property inheritance occurs when a property
has no instance-specific value set.

-e {FMRI | pattern}...

Enable the specified service instances.

-d {FMRI | pattern}...

Disable the specified service instances.

-m {FMRI | pattern}...{name=value}...

Change the values of the specified properties of the
identified service instances. Properties are specified
as whitespace-separated name=value pairs. To remove an
instance-specific value and accept the default value for
a property, simply specify the property without a value,
for example, name= .

-M {name=value}...

Change the values of the specified default properties
globally. Properties are specified as whitespace-separated
name=value pairs.
Examples:
# inetadm -l network/rpc/spray:default
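Because `inetadm -p` and `inetadm -l` emit plain name=value pairs, individual properties are easy to pick out in a script. A sketch using a canned sample (the property values below are illustrative, not captured from a live system):

```shell
# canned `inetadm -p`-style default properties (illustrative values)
defaults='bind_addr=""
bind_fail_max=-1
tcp_trace=FALSE
tcp_wrappers=FALSE'

# extract one property's value from the name=value list
tcp_trace=$(printf '%s\n' "$defaults" | awk -F= '$1 == "tcp_trace" {print $2}')
echo "$tcp_trace"
```

On a real system the first command would simply be `inetadm -p` (or `inetadm -l FMRI` for one instance).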

inetconv

The inetconv utility converts a file containing inetd.conf-format records into SMF service
manifests, and then imports those manifests into the SMF repository. Once the inetd.conf file has been converted,
the only way to change aspects of an inet service is to use the inetadm utility.

Example:

Adding swat (the Samba web administration tool) as an inetd-managed SMF service:
First, create a file containing a valid inetd.conf-style entry for swat:

# vi /inet.swat
--------------------
swat stream tcp nowait root /usr/sfw/sbin/swat swat
--------------------

Now run inetconv as follows:-

# inetconv -i /inet.swat

swat -> /var/svc/manifest/network/swat-tcp.xml
Importing swat-tcp.xml ...Done

Now swat can be enabled (Note the service name):-

# inetadm -e svc:/network/swat/tcp:default

Shell script to get uptime, disk, CPU, RAM, and system load from multiple Linux servers, and output the information on a single server in HTML format

http://bash.cyberciti.biz/monitoring/get-system-information-in-html-format/

Managing CPU Utilization

TOP COMMAND

# top

You can see Linux CPU utilization under the CPU stats: each task's share of the elapsed CPU time since the last screen update, expressed as a percentage of total CPU time. In a true SMP environment (multiple CPUs), top reports utilization across all CPUs. Note that you need to press the q key to exit the top display.

The top command produces a frequently-updated list of processes. By default, the processes are ordered by percentage of CPU usage, with only the "top" CPU consumers shown. The top command shows how much processing power and memory are being used, as well as other information about the running processes.
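top derives its CPU percentages from the cumulative counters in /proc/stat. A sketch of the same arithmetic, run here against one made-up sample "cpu" line (on a live Linux box you would read `head -1 /proc/stat` twice and diff the counters, since they count since boot):

```shell
# sample cumulative /proc/stat "cpu" line (illustrative values):
# fields after the label are user nice system idle iowait irq softirq steal ...
stat='cpu 10132153 290696 3084719 46828483 16683 0 25195 0 0 0'

set -- $stat; shift                  # drop the "cpu" label
total=0
for v in "$@"; do total=$((total + v)); done
idle=$4                              # 4th field after the label is idle time

busy_pct=$(( (total - idle) * 100 / total ))
echo "${busy_pct}% busy"             # → 22% busy for this sample
```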

MPSTAT COMMAND

mpstat ships in the sysstat package. If you are using an SMP (multiple CPU) system, use the mpstat command to display the utilization of each CPU individually. It reports processor-related statistics. For example, type:

# mpstat

-----------------------------

Linux 2.6.15.4 (debian)         Thursday 06 April 2006

05:13:05 IST CPU %user %nice %sys %iowait %irq %soft %steal %idle intr/s
05:13:05 IST all 16.52 0.00 2.87 1.09 0.07 0.02 0.00 79.42 830.06
-----------------------------

# mpstat -P ALL
Shows the processing details for every individual CPU.


CPU utilization using sar command

You can display today's CPU activity with the sar command:

# sar

-----------------------------

Linux 2.6.9-42.0.3.ELsmp (dellbox.xyz.co.in)         01/13/2007

12:00:02 AM CPU %user %nice %system %iowait %idle
12:10:01 AM all 1.05 0.00 0.28 0.04 98.64
12:20:01 AM all 0.74 0.00 0.34 0.38 98.54
12:30:02 AM all 1.09 0.00 0.28 0.10 98.53
12:40:01 AM all 0.76 0.00 0.21 0.03 99.00
12:50:01 AM all 1.25 0.00 0.32 0.03 98.40
01:00:01 AM all 0.80 0.00 0.24 0.03 98.92
...
.....
..
04:40:01 AM all 8.39 0.00 33.17 0.06 58.38
04:50:01 AM all 8.68 0.00 37.51 0.04 53.78
05:00:01 AM all 7.10 0.00 30.48 0.04 62.39
05:10:01 AM all 8.78 0.00 37.74 0.03 53.44
05:20:02 AM all 8.30 0.00 35.45 0.06 56.18
Average: all 3.09 0.00 9.14 0.09 87.68

-----------------------------
# sar -u 2 5

Where,

  • -u 2 5 : Report CPU utilization every 2 seconds, 5 times. The following values are displayed:
    • %user: Percentage of CPU utilization that occurred while executing at the user level (application).
    • %nice: Percentage of CPU utilization that occurred while executing at the user level with nice priority.
    • %system: Percentage of CPU utilization that occurred while executing at the system level (kernel).
    • %iowait: Percentage of time that the CPU or CPUs were idle during which the system had an outstanding disk I/O request.
    • %idle: Percentage of time that the CPU or CPUs were idle and the system did not have an outstanding disk I/O request.
To get multiple samples and multiple reports, set an output file for the sar command and run it as a background process:

# sar -o output.file 12 8 >/dev/null 2>&1 &

Better use nohup command so that you can logout and check back report later on:

# nohup sar -o output.file 12 8 >/dev/null 2>&1 &

All data is captured in binary form and saved to a file (output.file above). The data can then be selectively displayed with the sar command using the -f option.

# sar -f output.file
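The collect-then-report workflow above can be sketched as the short sequence below. Since sar needs the sysstat package installed, a hypothetical `run` wrapper (not part of sysstat) echoes each command instead of executing it; remove the wrapper to run it for real:

```shell
# hypothetical dry-run wrapper: echoes each step instead of executing it
run() { echo "+ $*"; }

interval=12   # seconds between samples
count=8       # number of samples

run nohup sar -o output.file "$interval" "$count"   # collect in the background
run sar -f output.file                              # read the binary data back later

# the collector runs for roughly interval * count seconds before exiting
echo "collection window: $((interval * count))s"
```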

Find out who is monopolizing or eating the CPUs

Finally, you need to determine which process is monopolizing or eating the CPUs. The following command displays the top 10 CPU consumers on the Linux system.


# ps -eo pcpu,pid,user,args | sort -k1 -nr | head -10
OR
# ps -eo pcpu,pid,user,args | sort -k1 -nr | less

-----------------------------

%CPU   PID USER     COMMAND
96 2148 kiran /usr/lib/vmware/bin/vmware-vmx -C /var/lib/vmware/Virtual Machines/Ubuntu 64-bit/Ubuntu 64-bit.vmx -@ ""
0.7 3358 mysql /usr/libexec/mysqld --defaults-file=/etc/my.cnf --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --skip-locking --socket=/var/lib/mysql/mysql.sock
0.4 29129 lighttpd /usr/bin/php
0.4 29128 lighttpd /usr/bin/php
0.4 29127 lighttpd /usr/bin/php
0.4 29126 lighttpd /usr/bin/php
0.2 2177 vivek [vmware-rtc]
0.0 9 root [kacpid]
0.0 8 root [khelper]

-----------------------------
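One caveat when sorting ps output by %CPU: a plain `sort -r` compares lexically, so "9.5" would land above "10.2"; `sort -nr` compares numerically. A small demonstration on canned ps-like lines:

```shell
# three fake ps lines: %CPU PID USER COMMAND
data='10.2 101 web php
9.5 99 web php
0.4 50 db mysqld'

lexical_top=$(printf '%s\n' "$data" | sort -r -k1 | head -1)
numeric_top=$(printf '%s\n' "$data" | sort -nr -k1 | head -1)
echo "lexical: $lexical_top"    # 9.5 wins (wrong)
echo "numeric: $numeric_top"    # 10.2 wins (right)
```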


Monday, August 17, 2009

Extending a logical volume

To extend a logical volume, you simply tell the lvextend command how much you want to increase the size. You can specify how much to grow the volume by, or how large you want it to become:

# lvextend -L12G /dev/myvg/homevol

lvextend -- extending logical volume "/dev/myvg/homevol" to 12 GB
lvextend -- doing automatic backup of volume group "myvg"
lvextend -- logical volume "/dev/myvg/homevol" successfully extended

will extend /dev/myvg/homevol to 12 Gigabytes.

# lvextend -L+1G /dev/myvg/homevol

lvextend -- extending logical volume "/dev/myvg/homevol" to 13 GB
lvextend -- doing automatic backup of volume group "myvg"
lvextend -- logical volume "/dev/myvg/homevol" successfully extended

will add another gigabyte to /dev/myvg/homevol.

After you have extended the logical volume, it is necessary to increase the file system size to match. How you do this depends on the file system you are using.

By default, most file system resizing tools will increase the size of the file system to be the size of the underlying logical volume so you don't need to worry about specifying the same size for each of the two commands.
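For the common ext2/ext3-on-LVM case, the whole grow operation is the sequence sketched below. Since it needs a real volume group and root to execute, a hypothetical `run` wrapper echoes each command instead of running it; drop the wrapper on a real system:

```shell
# hypothetical dry-run wrapper: echoes each step instead of executing it
run() { echo "+ $*"; }

lv=/dev/myvg/homevol    # example logical volume from the text

run umount "$lv"           # offline resize (no ext2online patch assumed)
run lvextend -L+1G "$lv"   # grow the logical volume by 1 GB
run resize2fs "$lv"        # grow the fs to fill the volume (defaults to LV size)
run mount "$lv" /home
```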

  1. ext2/ext3

    Unless you have patched your kernel with the ext2online patch it is necessary to unmount the file system before resizing it. (It seems that the online resizing patch is rather dangerous, so use at your own risk)

     # umount /dev/myvg/homevol
    # resize2fs /dev/myvg/homevol
    # mount /dev/myvg/homevol /home

    If you don't have e2fsprogs 1.19 or later, you can download the ext2resize command from ext2resize.sourceforge.net and use that:

     # umount /dev/myvg/homevol
    # ext2resize /dev/myvg/homevol
    # mount /dev/myvg/homevol /home

    For ext2 there is an easier way. LVM 1 ships with a utility called e2fsadm which does the lvextend and resize2fs for you (it can also do file system shrinking, see the next section).

    Warning: LVM 2 Caveat

    There is currently no e2fsadm equivalent for LVM 2 and the e2fsadm that ships with LVM 1 does not work with LVM 2.

    so the single command
       # e2fsadm -L+1G /dev/myvg/homevol
    is equivalent to the two commands:
     # lvextend -L+1G /dev/myvg/homevol
    # resize2fs /dev/myvg/homevol

    Note

    You will still need to unmount the file system before running e2fsadm.

  2. reiserfs

    Reiserfs file systems can be resized when mounted or unmounted as you prefer:

    • Online:

         # resize_reiserfs -f /dev/myvg/homevol

    • Offline:

      # umount /dev/myvg/homevol
      # resize_reiserfs /dev/myvg/homevol
      # mount -treiserfs /dev/myvg/homevol /home

  3. xfs

    XFS file systems must be mounted to be resized and the mount-point is specified rather than the device name.

       # xfs_growfs /home

  4. jfs

    Just like XFS the JFS file system must be mounted to be resized and the mount-point is specified rather than the device name. You need at least Version 1.0.21 of the jfs-utils to do this.

    # mount -o remount,resize /home

    Warning: Known Kernel Bug

    Some kernel versions have problems with this syntax (2.6.0 is known to have this problem). In this case you have to explicitly specify the new size of the filesystem in blocks. This is extremely error prone as you must know the blocksize of your filesystem and calculate the new size based on those units.

    Example: to resize a JFS file system with 4k blocks to 4 gigabytes, you would write:

    # mount -o remount,resize=1048576 /home

Reducing a logical volume

Logical volumes can be reduced in size as well as increased. However, it is very important to remember to reduce the size of the file system or whatever is residing in the volume before shrinking the volume itself, otherwise you risk losing data.

  1. ext2
    If you are using LVM 1 with ext2 as the file system then you can use the e2fsadm command mentioned earlier to take care of both the file system and volume resizing as follows:


    # umount /home
    # e2fsadm -L-1G /dev/myvg/homevol
    # mount /home
    
    Warning: LVM 2 Caveat
    There is currently no e2fsadm equivalent for LVM 2 and the e2fsadm that ships with LVM 1 does not work with LVM 2.
    If you prefer to do this manually you must know the new size of the volume in blocks and use the following commands:


    # umount /home
    # resize2fs /dev/myvg/homevol 524288
    # lvreduce -L-1G /dev/myvg/homevol
    # mount /home
    

  2. reiserfs
    Reiserfs seems to prefer to be unmounted when shrinking:


    # umount /home
    # resize_reiserfs -s-1G /dev/myvg/homevol
    # lvreduce -L-1G /dev/myvg/homevol
    # mount -treiserfs /dev/myvg/homevol /home
    

  3. xfs
    There is no way to shrink XFS file systems.

  4. jfs
    There is no way to shrink JFS file systems.
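The manual ext2 shrink in item 1 above needs the new file system size in blocks (the 524288 passed to resize2fs). That figure is just the target size divided by the block size; a sketch of the arithmetic, assuming 4 KiB blocks:

```shell
target_gib=2    # shrink the file system to 2 GiB
block_kib=4     # ext2/ext3 commonly use 4 KiB blocks

# blocks = target size / block size, both expressed in KiB
blocks=$(( target_gib * 1024 * 1024 / block_kib ))
echo "$blocks"  # → 524288, the figure passed to resize2fs above
```

Check the actual block size of your file system (e.g. with dumpe2fs) before trusting the 4 KiB assumption.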


Differences between LVM1 and LVM2

The new release of LVM, LVM 2, is available only on Red Hat Enterprise Linux 4 and later kernels. It is upwardly compatible with LVM 1 and retains the same command line interface structure. However, it uses a new, more scalable and resilient metadata structure that allows for transactional metadata updates (which allow quick recovery after server failures), very large numbers of devices, and clustering. For Enterprise Linux servers deployed in mission-critical environments that require high availability, LVM2 is the right choice for Linux volume management. Table 1 (A comparison of LVM 1 and LVM 2) summarizes the differences between LVM1 and LVM2 in features, kernel support, and other areas.

Feature                                   LVM1               LVM2
RHEL AS 2.1 support                       No                 No
RHEL 3 support                            Yes                No
RHEL 4 support                            No                 Yes
Transactional metadata for fast recovery  No                 Yes
Shared volume mounts with GFS             No                 Yes
Cluster Suite failover supported          Yes                Yes
Striped volume expansion                  No                 Yes
Max number of PVs, LVs                    256 PVs, 256 LVs   2**32 PVs, 2**32 LVs
Max device size                           2 Terabytes        8 Exabytes (64-bit CPUs)
Volume mirroring support                  No                 Yes, in Fall 2005

Table 1. A comparison of LVM 1 and LVM 2