Friday, August 7, 2009

Shell Script Variables

Variable   Meaning
$0         Filename of the script
$1         Positional parameter #1
$2 - $9    Positional parameters #2 - #9
${10}      Positional parameter #10
$#         Number of positional parameters
"$*"       All the positional parameters, as a single word (must be quoted)
"$@"       All the positional parameters, as separate quoted strings
${#*}      Number of command-line parameters passed to the script
${#@}      Number of command-line parameters passed to the script
$?         Exit status of the last command executed
$$         Process ID (PID) of the script
$-         Flags passed to the script (via set)
$_         Last argument of the previous command
$!         Process ID (PID) of the last job run in the background
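
A small test script shows most of these in action (the script name showargs.sh and the arguments are just examples):

---------------------
#!/bin/bash
# showargs.sh - run as: ./showargs.sh one two three
echo "Script name (\$0)        : $0"
echo "First parameter (\$1)    : $1"
echo "Parameter count (\$#)    : $#"
echo "All parameters (\"\$*\")   : $*"
echo "All parameters (\"\$@\")   : $@"
echo "Script PID (\$\$)         : $$"
echo "Shell flags (\$-)        : $-"
sleep 1 &
echo "Background PID (\$!)     : $!"
false
echo "Exit status of false (\$?): $?"
---------------------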

Disabling the beep in terminal

Edit /etc/inputrc and add the following line:
---------------------
set bell-style none
---------------------
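
The same readline directive can also be set per user, without root access, by adding it to ~/.inputrc (a sketch; this affects readline-based programs such as bash, and a new shell is needed to pick it up):

$ echo 'set bell-style none' >> ~/.inputrc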

Linux File system

EXT
The extended file system, or ext, was introduced in April 1992 and was the first file system created specifically for the Linux operating system. It was designed by Rémy Card to overcome certain limitations of the Minix file system. It is the first of the extended file systems. It was superseded by both ext2 and xiafs, between which there was a competition, which ext2 won because of its long-term viability.

EXT2
The ext2 or second extended filesystem is a file system for the Linux kernel. It was initially designed by Rémy Card as a replacement for the extended file system (ext).
ext2 was the default filesystem in several Linux distributions, including Debian and Red Hat Linux, until supplanted more recently by ext3, which is almost completely compatible with ext2 and is a journaling file system. ext2 is still the filesystem of choice for flash-based storage media (such as SD cards, SSDs, and USB flash drives), since its lack of a journal minimizes the number of writes, and flash devices can endure only a limited number of write cycles.

FileSystem Limits:

Some limits of the ext2 file system come from the on-disk data format and from the operating system's kernel. Most of them are fixed once, when the file system is created, and depend on the block size and on the ratio of blocks to inodes. In Linux the block size is limited by the architecture's page size. There are also many userspace programs that cannot handle files larger than 2 GB. The limit on sublevel directories is about 32,768. If the number of files in a directory exceeds roughly 10,000 to 15,000, the user will normally be warned that operations can take a long time unless directory indexing is enabled. The theoretical limit on the number of files in a directory is 1.3 × 10^20, although this is not relevant for practical situations.
Block size:            1 KiB    2 KiB    4 KiB    8 KiB
Max. file size:        16 GiB   256 GiB  4 TiB    64 TiB
Max. filesystem size:  4 TiB    8 TiB    16 TiB   32 TiB
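
Since most of these limits are fixed by the block size chosen when the filesystem is created, it can be useful to check what an existing filesystem uses; a sketch with tune2fs (the device name is only an example):

# tune2fs -l /dev/sda1 | grep -i 'block size'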

EXT3
The ext3 or third extended filesystem is a journaled file system that is commonly used by the Linux kernel. It is the default file system for many popular Linux distributions. Stephen Tweedie first revealed that he was working on extending ext2 in a 1998 paper, Journaling the Linux ext2fs Filesystem,[2] and later in a February 1999 kernel mailing list posting,[3] and the filesystem was merged with the mainline Linux kernel in November 2001, from 2.4.15 onward.[4] Its main advantage over ext2 is journaling, which improves reliability and eliminates the need to check the file system after an unclean shutdown. Its successor is ext4.

Advantages:

Although its performance (speed) is less attractive than competing Linux filesystems such as JFS, ReiserFS and XFS, it has a significant advantage in that it allows in-place upgrades from the ext2 file system without having to back up and restore data. Ext3 also uses less CPU power than ReiserFS and XFS.[5] It is also considered safer than the other Linux file systems due to its relative simplicity and wider testing base.

The ext3 file system adds, over its predecessor:

  • A journal
  • Online file system growth
  • HTree indexing for larger directories (an HTree is a specialized version of a B-tree)
Without these features, any ext3 file system is also a valid ext2 file system.
This has allowed well-tested and mature file system maintenance utilities for maintaining and repairing ext2 file systems to also be used with ext3 without major changes. The ext2 and ext3 file systems share the same standard set of utilities, e2fsprogs, which includes a fsck tool. The close relationship also makes conversion between the two file systems (both forward to ext3 and backward to ext2) straightforward.
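
For example, the usual in-place conversion from ext2 to ext3 is just a matter of adding a journal with tune2fs and then changing the filesystem type in /etc/fstab (device name illustrative):

# tune2fs -j /dev/sda1      # adds a journal; the filesystem can now be mounted as ext3
# vi /etc/fstab             # change the entry's type from ext2 to ext3
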
While in some contexts the lack of "modern" filesystem features such as dynamic inode allocation and extents could be considered a disadvantage, this same simplicity is a large part of why ext3 is regarded as safe and easy to recover.

Journaling Levels:
There are three levels of journaling available in the Linux implementation of ext3:

Journal (lowest risk):

Both metadata and file contents are written to the journal before being committed to the main file system. Because the journal is relatively continuous on disk, this can improve performance in some circumstances. In other cases, performance gets worse because the data must be written twice - once to the journal, and once to the main part of the filesystem.[8]

Ordered (medium risk):
Only metadata is journaled; file contents are not, but it's guaranteed that file contents are written to disk before associated metadata is marked as committed in the journal. This is the default on many Linux distributions. If there is a power outage or kernel panic while a file is being written or appended to, the journal will indicate the new file or appended data has not been "committed", so it will be purged by the cleanup process. (Thus appends and new files have the same level of integrity protection as the "journaled" level.) However, files being overwritten can be corrupted because the original version of the file is not stored. Thus it's possible to end up with a file in an intermediate state between new and old, without enough information to restore either one or the other (the new data never made it to disk completely, and the old data is not stored anywhere). Even worse, the intermediate state might intersperse old and new data, because the order of the writes is left up to the disk hardware.

Writeback (highest risk):
Only metadata is journaled; file contents are not. The contents might be written before or after the journal is updated. As a result, files modified right before a crash can become corrupted. For example, a file being appended to may be marked in the journal as being larger than it actually is, causing garbage at the end. Older versions of files could also appear unexpectedly after a journal recovery. The lack of synchronization between data and journal is faster in many cases. XFS and JFS use this level of journaling, but ensure that any "garbage" due to unwritten data is zeroed out on reboot.
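
The journaling level is selected per mount with the data= option, either on the mount command line or in /etc/fstab (a sketch; the device and mount point are only examples):

# mount -t ext3 -o data=journal /dev/sda2 /var/mail

or, as an /etc/fstab entry:

/dev/sda2   /var/mail   ext3   defaults,data=journal   0 2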

Size Limit

Block size   Max file size   Max filesystem size
1 KiB        16 GiB          2 TiB
2 KiB        256 GiB         4 TiB
4 KiB        2 TiB           8 TiB
8 KiB        2 TiB           16 TiB

EXT4

The ext4 or fourth extended filesystem is a journaling file system developed as the successor to ext3. It was born as a series of backward compatible extensions to add 64-bit storage limits and other performance improvements to ext3.[1] However, other Linux kernel developers opposed accepting extensions to ext3 for stability reasons.

Features:
1. Large file system
The ext4 filesystem can support volumes with sizes up to 1 exbibyte and files with sizes up to 16 tebibytes.

2. Extents
Extents are introduced to replace the traditional block mapping scheme used by ext2/3 filesystems. An extent is a range of contiguous physical blocks, improving large file performance and reducing fragmentation. A single extent in ext4 can map up to 128MiB of contiguous space with a 4KiB block size.[1] There can be 4 extents stored in the Inode. When there are more than 4 extents to a file, the rest of the extents are indexed in an Htree.

3. Backward Compatibility
The ext4 filesystem is backward compatible with ext3 and ext2, making it possible to mount ext3 and ext2 filesystems as ext4. This will already slightly improve performance, because certain new features of ext4 can also be used with ext3 and ext2, such as the new block allocation algorithm. The ext3 file system is partially forward compatible with ext4, that is, an ext4 filesystem can be mounted as an ext3 partition (using "ext3" as the filesystem type when mounting). However, if the ext4 partition uses extents (a major new feature of ext4), then the ability to mount the file system as ext3 is lost.
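
For example, an existing ext3 (or ext2) partition can simply be mounted with the ext4 driver, assuming a kernel with ext4 support (2.6.28 or later; the device and mount point are illustrative):

# mount -t ext4 /dev/sdb1 /mnt/data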

4. Persistent pre-allocation
The ext4 filesystem allows for pre-allocation of on-disk space for a file. The current methodology for this on most file systems is to write the file full of 0s to reserve the space when the file is created. This method is no longer required for ext4; instead, a new fallocate() system call was added to the Linux kernel for use by filesystems, including ext4 and XFS, that have this capability. The space allocated for such files is guaranteed and is likely to be contiguous. This has applications for media streaming and databases.
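
From the shell, the fallocate(1) utility from util-linux (where available) wraps this system call; the file name and size below are only examples:

# fallocate -l 512M /srv/media/prealloc.bin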

5. Delayed allocation
Ext4 uses a filesystem performance technique called allocate-on-flush, also known as delayed allocation. It consists of delaying block allocation until the data is going to be written to the disk, unlike some other file systems, which may allocate the necessary blocks before that step. This improves performance and reduces fragmentation by improving block allocation decisions based on the actual file size.

6. Break 32,000 subdirectory limit
In ext3 the number of subdirectories that a directory can contain is limited to 32,000. This limit has been raised to 64,000 in ext4, and with the "dir_nlink" feature it can go beyond this (although it will stop increasing the link count on the parent). To allow for continued performance given the possibility of much larger directories, Htree indexes (a specialized version of a B-tree) are turned on by default in ext4. This feature is implemented in Linux kernel 2.6.23. Htree is also available in ext3 when the dir_index feature is enabled.
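
You can see which features a given ext2/ext3/ext4 filesystem has enabled, and turn on directory indexing for ext3, with tune2fs (a sketch; the device name is illustrative, and the e2fsck -D pass that re-indexes existing directories should be run on an unmounted filesystem):

# tune2fs -l /dev/sda1 | grep -i 'features'
# tune2fs -O dir_index /dev/sda1
# e2fsck -fD /dev/sda1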

7. Journal checksumming
Ext4 uses checksums in the journal to improve reliability, since the journal is one of the most used files of the disk. This feature has a side benefit; it can safely avoid a disk I/O wait during the journaling process, improving performance slightly. The technique of journal checksumming was inspired by a research paper from the University of Wisconsin titled IRON File Systems (specifically, section 6, called "transaction checksums").

8. Online defragmentation
There are a number of proposals for an online defragmenter, but that support is not yet included in the mainline kernel. Even with the various techniques used to avoid fragmentation, a long lived file system does tend to become fragmented over time. Ext4 will have a tool which can defragment individual files or entire file systems.

9. Faster file system checking
In ext4, unallocated block groups and sections of the inode table are marked as such. This enables e2fsck to skip them entirely on a check and greatly reduces the time it takes to check a file system of the size ext4 is built to support. This feature is implemented in version 2.6.24 of the Linux kernel.

10. Multiblock allocator
Ext4 allocates multiple blocks for a file in a single operation, which reduces fragmentation by attempting to choose contiguous blocks on the disk. The multiblock allocator is active when using O_DIRECT or if delayed allocation is on. This allows the file to have many dirty blocks submitted for writes at the same time, unlike the existing kernel mechanism of submitting each block to the filesystem separately for allocation.

11. Improved timestamps
As computers become faster in general and as Linux becomes used more for mission critical applications, the granularity of second-based timestamps becomes insufficient. To solve this, ext4 provides timestamps measured in nanoseconds. In addition, 2 bits of the expanded timestamp field are added to the most significant bits of the seconds field of the timestamps to defer the year 2038 problem for approximately an additional 204 years. Ext4 also adds support for date-created timestamps. But, as Theodore Ts'o points out, while it is easy to add an extra creation-date field in the inode (thus technically enabling support for date-created timestamps in ext4), it is more difficult to modify or add the necessary system calls, like stat() (which would probably require a new version), and the various libraries that depend on them (like glibc). These changes would require coordination of many projects. So, even if ext4 developers implement initial support for creation-date timestamps, this feature will not be available to user programs for now.
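
The nanosecond timestamps are visible from user space with stat, which prints fractional seconds on filesystems that store them (the file name is just an example):

$ stat -c 'access: %x  modify: %y  change: %z' /etc/hostname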

Disadvantages
Delayed allocation and potential data loss
Delayed allocation poses some additional risk of data loss in cases where the system crashes before all of the data has been written to the disk. The typical scenario in which this might occur is a program replacing the contents of a file without forcing a write to the disk with fsync. Problems can arise if the system crashes before the actual write occurs. In this situation, users of ext3 have come to expect that the disk will hold either the old version or the new version of the file following the crash. However, the ext4 code in Linux kernel version 2.6.28 will often clear the contents of the file before the crash but never write the new version, thus losing the contents of the file entirely. Many people find such behavior unacceptable.

A significant issue is that fixing the bug by using fsync more often could lead to severe performance penalties on ext3 filesystems mounted with the data=ordered flag (the default on most Linux distributions). Given that both file systems will be in use for some time, this complicates matters enormously for end-user application developers. In response, Theodore Ts'o has written some patches for ext4 that cause it to limit its delayed allocation in these common cases. For a small cost in performance, this will significantly increase the chance that either version of the file will survive the crash. The new patches are expected to become part of the mainline kernel 2.6.30. Various distributions may choose to backport them to 2.6.28 or 2.6.29; for instance, Ubuntu made them part of the 2.6.28 kernel in version 9.04 (Jaunty Jackalope).

Understanding Linux CPU Load

You might be familiar with Linux load averages already. Load averages are the three numbers shown with the uptime and top commands - they look like this
- load average: 0.09, 0.05, 0.01
Most people have an inkling of what the load averages mean: the three numbers represent averages over progressively longer periods of time (one-, five-, and fifteen-minute averages), and lower numbers are better. Higher numbers represent a problem or an overloaded machine. But what's the threshold? What constitutes "good" and "bad" load average values? When should you be concerned over a load average value, and when should you scramble to fix it ASAP?
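
Besides uptime and top, the same three numbers can be read straight from the proc filesystem:

$ cat /proc/loadavg
The first three fields are the 1-, 5-, and 15-minute load averages; the fourth is currently runnable processes over total processes, and the last is the most recently assigned PID.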

First, a little background on what the load average values mean. We'll start out with the simplest case: a machine with one single-core processor.

A single-core CPU is like a single lane of traffic. Imagine you are a bridge operator ... sometimes your bridge is so busy there are cars lined up to cross. You want to let folks know how traffic is moving on your bridge. A decent metric would be how many cars are waiting at a particular time. If no cars are waiting, incoming drivers know they can drive across right away. If cars are backed up, drivers know they're in for delays.

So, Bridge Operator, what numbering system are you going to use? How about:

  • 0.00 means there's no traffic on the bridge at all. In fact, between 0.00 and 1.00 means there's no backup, and an arriving car will just go right on.
  • 1.00 means the bridge is exactly at capacity. All is still good, but if traffic gets a little heavier, things are going to slow down.
  • over 1.00 means there's backup. How much? Well, 2.00 means that there are two lanes' worth of cars total -- one lane's worth on the bridge, and one lane's worth waiting. 3.00 means there are three lanes' worth total -- one lane's worth on the bridge, and two lanes' worth waiting. Etc.

[Bridge illustrations omitted: load of 1.00, load of 0.50, load of 1.70]

This is basically what CPU load is. "Cars" are processes using a slice of CPU time ("crossing the bridge") or queued up to use the CPU. Unix refers to this as the run-queue length: the sum of the number of processes that are currently running plus the number that are waiting (queued) to run.

Like the bridge operator, you'd like your cars/processes to never be waiting. So, your CPU load should ideally stay below 1.00. Also like the bridge operator, you are still OK if you get some temporary spikes above 1.00 ... but when you're consistently above 1.00, you need to worry.

So you're saying the ideal load is 1.00?

Well, not exactly. The problem with a load of 1.00 is that you have no headroom. In practice, many sysadmins will draw a line at 0.70:

  • The "Need to Look into it" Rule of Thumb: 0.70. If your load average is staying above 0.70, it's time to investigate before things get worse.

  • The "Fix this now" Rule of Thumb: 1.00. If your load average stays above 1.00, find the problem and fix it now. Otherwise, you're going to get woken up in the middle of the night, and it's not going to be fun.

  • The "Arrgh, it's 3AM WTF?" Rule of Thumb: 5.00. If your load average is above 5.00, you could be in serious trouble: your box is either hanging or slowing way down, and this will (inexplicably) happen at the worst possible time, like in the middle of the night or when you're presenting at a conference. Don't let it get there.

What about Multi-processors?

My load says 3.00, but things are running fine! Got a quad-processor system? It's still healthy with a load of 3.00. On a multi-processor system, the load is relative to the number of processor cores available. The "100% utilization" mark is 1.00 on a single-core system, 2.00 on a dual-core, 4.00 on a quad-core, etc.

If we go back to the bridge analogy, "1.00" really means "one lane's worth of traffic". On a one-lane bridge, that means it's filled up. On a two-lane bridge, a load of 1.00 means it's at 50% capacity -- only one lane is full, so there's another whole lane that can be filled.

[Illustration omitted: load of 2.00 on a two-lane bridge]

Same with CPUs: a load of 1.00 is 100% CPU utilization on a single-core box. On a dual-core box, a load of 2.00 is 100% CPU utilization.

Multicore vs. multiprocessor

While we're on the topic, let's talk about multicore vs. multiprocessor. For performance purposes, is a machine with a single dual-core processor basically equivalent to a machine with two processors with one core each? Yes. Roughly. There are lots of subtleties here concerning amount of cache, frequency of process hand-offs between processors, etc. Despite those finer points, for the purposes of sizing up the CPU load value, the total number of cores is what matters, regardless of how many physical processors those cores are spread across.

Bringing It Home

Let's take a look at the load averages output from uptime:
$ uptime
23:05 up 14 days, 6:08, 7 users, load averages: 0.65 0.42 0.36

This is on a dual-core CPU, so we've got lots of headroom. I won't even think about it until load gets and stays above 1.7 or so. Now, what about those three numbers? 0.65 is the average over the last minute, 0.42 is the average over the last five minutes, and 0.36 is the average over the last 15 minutes.

Which brings us to the question: Which average should I be observing? One, five, or 15 minute?
For the numbers we've talked about (1.00 = fix it now, etc), you should be looking at the five or 15-minute averages. Frankly, if your box spikes above 1.0 on the one-minute average, you're still fine. It's when the 15-minute average goes north of 1.0 and stays there that you need to snap to. (obviously, as we've learned, adjust these numbers to the number of processor cores your system has).
So # of cores is important to interpreting load averages ... how do I know how many cores my system has?

# cat /proc/cpuinfo
to get info on each processor in your system. To simply count the cores:
# grep 'model name' /proc/cpuinfo | wc -l
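
An equivalent count can be had by matching the "processor" lines directly (each logical core appears as one entry):

# grep -c '^processor' /proc/cpuinfo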

Linux Boot Process - Incomplete

POST: (Step 1)
When an x86 computer is booted, the processor looks at the end of the system memory for the BIOS (Basic Input/Output System) and runs it. The BIOS program is written into permanent read-only memory and is always available for use. The BIOS provides the lowest level interface to peripheral devices and controls the first step of the boot process.

BIOS: (Step 2)
The BIOS tests the system, looks for and checks peripherals, and then looks for a drive to use to boot the system. Usually it checks the floppy drive (or CD-ROM drive on many newer systems) for bootable media, if present, and then it looks to the hard drive. The order of the drives used for booting is usually controlled by a particular BIOS setting on the system. Once Linux is installed on the hard drive of a system, the BIOS looks for a Master Boot Record (MBR) starting at the first sector on the first hard drive, loads its contents into memory, then passes control to it.

MBR: (Step 3)
This first sector is called the Master Boot Record (also known as the partition table, or master boot block). At the start of this sector is a small program. This program uses the partition information (or partition table) stored at the end of the sector to determine which partition is bootable, and then attempts to boot from it.
This MBR contains instructions on how to load the GRUB (or LILO) boot-loader, using a pre-selected operating system. The MBR then loads the boot-loader, which takes over the process (if the boot-loader is installed in the MBR). In the default Red Hat Linux configuration, GRUB uses the settings in the MBR to display boot options in a menu. Once GRUB has received the correct instructions for the operating system to start, either from its command line or configuration file, it finds the necessary boot file and hands off control of the machine to that operating system.

GRUB: (Step 4)
GRUB is independent of any particular operating system and may be thought of as a tiny, function-specific OS. The purpose of the GRUB kernel is to recognize filesystems and load boot images, and it provides both menu-driven and command-line interfaces to perform these functions. The command-line interface in particular is quite flexible and powerful, with command history and completion features familiar to users of the bash shell.

What is initrd ?

The initial ramdisk, or initrd is a temporary file system commonly used in the boot process of the Linux kernel. It is typically used for making preparations before the real root file system can be mounted.
------------------
INITRD is the initial ramdisk. Basically it is a chicken-and-egg solution.

When you boot your computer, the kernel is read from disk, uncompressed, and then starts to load drivers. Problem was, what if you needed drivers to access the disk? How would one read what is on the disk if you couldn't access the disk without the drivers? A good example is RAID. In certain RAID setups, data is striped across multiple disks: part of the data on one, part on the other. If you don't have the driver to tell the system how the RAID works, it will try to access the disk like normal. When it does, it finds the disk is missing half of the striped data. So with an initrd, we carve out a chunk of RAM and make it into a filesystem, and the initrd holds the drivers we need to access the disks. Using those drivers we can then access the drives and mount the real root filesystem....
------------------
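
If you are curious what your distribution puts in there, on most current systems the initrd image is a gzip-compressed cpio archive (an assumption; some older setups use a compressed filesystem image instead, and the file name and path vary by distribution), so its contents can be listed with something like:

# zcat /boot/initrd.img-$(uname -r) | cpio -itv | less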

What is vmlinuz?

vmlinuz is a compressed Linux kernel, and it is bootable. Bootable means that it is capable of loading the operating system into memory so that the computer becomes usable and application programs can be run.
vmlinuz should not be confused with vmlinux, which is the kernel in a non-compressed and non-bootable form. vmlinux is generally just an intermediate step to producing vmlinuz.
vmlinuz is not merely a compressed image. It also has gzip decompressor code built into it. gzip is one of the most popular compression utilities on Unix-like operating systems.
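
You can confirm what the file is with the file command (the path and kernel version naming vary by distribution):

$ file /boot/vmlinuz-$(uname -r)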

Thursday, August 6, 2009

Linux / UNIX Find Out What Program / Service is Listening on a Specific TCP Port

Q. How do I find out which service is listening on a specific port? How do I find out what program is listening on a specific TCP Port?

A.
Under Linux and UNIX you can use any one of the following commands to get a listing for a specific TCP port:
=> lsof : list open files including ports.

=> netstat : The netstat command symbolically displays the contents of various network-related data structures.

lsof command example :

Type the following command to see the IPv4 port(s) in use:

# lsof -Pnl +M -i4

Type the following command to see the IPv6 port(s) in use:
# lsof -Pnl +M -i6
-----------------------------------------------------------

COMMAND    PID     USER   FD   TYPE DEVICE SIZE NODE NAME
gweather- 6591 1000 17u IPv4 106812 TCP 192.168.1.100:57179->140.90.128.70:80 (ESTABLISHED)
firefox-b 6613 1000 29u IPv4 106268 TCP 127.0.0.1:60439->127.0.0.1:3128 (ESTABLISHED)
firefox-b 6613 1000 31u IPv4 106321 TCP 127.0.0.1:60440->127.0.0.1:3128 (ESTABLISHED)
firefox-b 6613 1000 44u IPv4 106325 TCP 127.0.0.1:60441->127.0.0.1:3128 (ESTABLISHED)
firefox-b 6613 1000 50u IPv4 106201 TCP 127.0.0.1:60437->127.0.0.1:3128 (ESTABLISHED)
deluge 6908 1000 8u IPv4 23179 TCP *:6881 (LISTEN)
deluge 6908 1000 30u IPv4 23185 UDP *:6881
deluge 6908 1000 45u IPv4 106740 TCP 192.168.1.100:50584->217.169.223.161:38406 (SYN_SENT)
deluge 6908 1000 57u IPv4 70529 TCP 192.168.1.100:57325->24.67.82.222:21250 (ESTABLISHED)
deluge 6908 1000 58u IPv4 106105 TCP 192.168.1.100:38073->24.16.233.1:48479 (ESTABLISHED)
..........
......
ssh 6917 1000 3u IPv4 23430 TCP 10.1.11.3:42658->10.10.29.66:22 (ESTABLISHED)

-----------------------------------------------------------

First column COMMAND - gives out information about the program name. Please see the output header for details. For example, the gweather* command gets weather information from the U.S. National Weather Service (NWS) servers (140.90.128.70), including the Interactive Weather Information Network (IWIN), and other weather services.
Where,

  1. -P : This option inhibits the conversion of port numbers to port names for network files. Inhibiting the conversion may make lsof run a little faster. It is also useful when port name lookup is not working properly.
  2. -n : This option inhibits the conversion of network numbers to host names for network files. Inhibiting conversion may make lsof run faster. It is also useful when host name lookup is not working properly.
  3. -l : This option inhibits the conversion of user ID numbers to login names. It is also useful when login name lookup is working improperly or slowly.
  4. +M : Enables the reporting of portmapper registrations for local TCP and UDP ports.
  5. -i4 : IPv4 listing only
  6. -i6 : IPv6 listing only
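
To answer the original question for one specific port, lsof can also filter on the port directly (port 22 is just an example; the -sTCP:LISTEN qualifier needs a reasonably recent lsof):

# lsof -Pnl -i :22
# lsof -Pnl -iTCP:22 -sTCP:LISTEN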

netstat command example:


# netstat -tulpn
OR
# netstat -npl
Output:

-----------------------------------------------------------

Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp 0 0 0.0.0.0:6881 0.0.0.0:* LISTEN 6908/python
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 5562/cupsd
tcp 0 0 127.0.0.1:3128 0.0.0.0:* LISTEN 6278/(squid)
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 5854/exim4
udp 0 0 0.0.0.0:32769 0.0.0.0:* 6278/(squid)
udp 0 0 0.0.0.0:3130 0.0.0.0:* 6278/(squid)
udp 0 0 0.0.0.0:68 0.0.0.0:* 4583/dhclient3
udp 0 0 0.0.0.0:6881 0.0.0.0:* 6908/python

-----------------------------------------------------------

Last column PID/Program name gives out information regarding program name and port.
Where,

  • -t : TCP port
  • -u : UDP port
  • -l : Show only listening sockets.
  • -p : Show the PID and name of the program to which each socket / port belongs
  • -n : No DNS lookup (speed up operation)
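
Again, to narrow the listing down to a single port, pipe the output through grep (port 25 used as an example):

# netstat -tulpn | grep ':25 '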

/etc/services file:

/etc/services is a plain ASCII file providing a mapping between friendly textual names for internet services and their underlying assigned port numbers and protocol types. Every networking program should look into this file to get the port number (and protocol) for its service. You can view this file with the cat or less command:
$ cat /etc/services
$ grep 110 /etc/services
$ less /etc/services

Linux Online book - configuration examples

http://www.linuxtopia.org/online_books/index.html

Monday, August 3, 2009

Solaris Blog tips & tricks

http://www.camelrichard.org

Default Route Change Without Reboot: Solaris

View the routing table:

# netstat -rn

----------------------
Routing Table: IPv4
Destination          Gateway              Flags  Ref    Use  Interface
-------------------- -------------------- ----- ----- ------ ---------
10.67.13.0           10.67.13.50          U         1     42 hme0
10.67.3.0            10.67.3.65           U         1   9172 hme1
224.0.0.0            10.67.13.50          U         1      0 hme0
default              10.67.12.1           UG        1    148
127.0.0.1            127.0.0.1            UH        2    783 lo0

----------------------

Delete current default router:

# route delete default 10.67.12.1

Add new default router:

# route add default 10.67.13.1 1

If you're ssh'd in from afar, then combine the route delete and route add into one command:

# route delete default 10.67.12.1 ; route add default 10.67.13.1

As separate commands, you lose connectivity after the delete, so you can't enter the route add.

Also, to make the default route come up on the next reboot, change the address in:
/etc/defaultrouter
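
For example, using the new gateway from above:

# echo "10.67.13.1" > /etc/defaultrouter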

Solaris Runlevels

S    Single-user state (useful for recovery).

0    Go into firmware.

1    Put the system in system administrator mode. All local file systems are mounted. Only a small set of essential kernel processes are left running. This mode is for administrative tasks such as installing optional utility packages. All files are accessible and no users are logged in on the system.

2    Put the system in multi-user mode. All multi-user environment terminal processes and daemons are spawned. This state is commonly referred to as the multi-user state.

3    Extend multi-user mode by making local resources available over the network.

4    Is available to be defined as an alternative multi-user environment configuration. It is not necessary for system operation and is usually not used.

5    Shut the machine down so that it is safe to remove the power. Have the machine remove power, if possible.

6    Stop the operating system and reboot to the state defined by the initdefault entry in /etc/inittab.
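
To check which run level a Solaris box is currently in, who -r reports it:

# who -r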