
Example--md.conf File

Here is a sample md.conf file that is configured for five shared disk sets. The value of md_nsets is six, which yields five shared disk sets plus the one local disk set.

#
#
#pragma ident   "@(#)md.conf    2.1     00/07/07 SMI"
#
# Copyright (c) 1992-1999 by Sun Microsystems, Inc.
# All rights reserved.
#
name="md" parent="pseudo" nmd=128 md_nsets=6;
# Begin MDD database info (do not edit)
...
# End MDD database info (do not edit)
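
For an edited md.conf file to take effect, a reconfiguration reboot is typically required. The following command is one way to perform that reboot (this step is a general note about md.conf handling rather than part of the example above):

# reboot -- -r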

Growing a File System

After a volume that contains a UFS file system is expanded (more space is added), you must also "grow" the file system so that it recognizes the added space. You grow the file system manually with the growfs command. The growfs command can expand the file system even while it is mounted. However, write access to the file system is not possible while the growfs command is running.

An application, such as a database, that uses the raw device must have its own method to grow added space. Solaris Volume Manager does not provide this capability.

The growfs command will "write-lock" a mounted file system as it expands the file system. The length of time the file system is write-locked can be shortened by expanding the file system in stages. For instance, to expand a 1 Gbyte file system to 2 Gbytes, the file system can be grown in 16 Mbyte stages using the -s option to specify the total size of the new file system at each stage.

During the expansion, the file system is not available for write access because of the write lock. Write accesses are transparently suspended and are restarted when the growfs command unlocks the file system. Read accesses are not affected, though access times are not kept while the lock is in effect.
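
For example, a staged expansion of a hypothetical file system /files on volume d20 might look like the following sketch. The volume name, mount point, and sector counts are illustrative only; the -s option specifies the total size of the file system, in sectors, at each stage.

# growfs -s 2129920 -M /files /dev/md/rdsk/d20
# growfs -s 2162688 -M /files /dev/md/rdsk/d20
...
# growfs -M /files /dev/md/rdsk/d20

Each stage adds roughly 16 Mbytes (32768 sectors of 512 bytes), and the final growfs command with no -s option expands the file system to fill the remaining space on the volume.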

Background Information for Expanding Slices and Volumes


Note - Solaris Volume Manager volumes can be expanded, but not shrunk.


  • A volume can be expanded regardless of whether it is used for a file system, application, or database. You can expand RAID 0 (stripe and concatenation) volumes, RAID 1 (mirror) volumes, and RAID 5 volumes, as well as soft partitions.

  • You can concatenate additional space onto a volume that contains an existing file system while the file system is in use (see the sketch after this list). Then, as long as the file system is UFS, you can expand it with the growfs command to fill the larger space without interrupting read access to the data.

  • Once a file system is expanded, it cannot be shrunk, due to constraints in UFS.

  • Applications and databases that use the raw device must have their own method to "grow" the added space so that they can recognize it. Solaris Volume Manager does not provide this capability.

  • When a component is added to a RAID 5 volume, it is attached as a concatenation to the volume. The new component does not contain parity information. However, data on the new component is protected by the overall parity calculation that takes place for the volume.

  • You can expand a log device by adding additional components. You do not need to run the growfs command, as Solaris Volume Manager automatically recognizes the additional space on reboot.

  • Soft partitions can be expanded by adding space from the underlying volume or slice. All other volumes can be expanded by adding slices.
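
As a sketch of the concatenation described in this list, a spare slice might be attached to an existing volume with the metattach command before the file system is grown. The volume name, slice name, and mount point here are hypothetical:

# metattach d25 c2t1d0s2
# growfs -M /data /dev/md/rdsk/d25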

How to Grow a File System

  1. Check "Prerequisites for Creating Solaris Volume Manager Elements".

  2. Use the growfs command to grow a UFS on a logical volume.

    # growfs -M /mount-point /dev/md/rdsk/volumename

    See the following example and the growfs(1M) man page for more information.

Example--Growing a File System

# df -k
Filesystem            kbytes    used   avail capacity  Mounted on
...
/dev/md/dsk/d10        69047   65426       0   100%    /home2
...
# growfs -M /home2 /dev/md/rdsk/d10
/dev/md/rdsk/d10:       295200 sectors in 240 cylinders of 15 tracks, 82 sectors
        144.1MB in 15 cyl groups (16 c/g, 9.61MB/g, 4608 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 19808, 39584, 59360, 79136, 98912, 118688, 138464, 158240, 178016, 197792,
 217568, 237344, 257120, 276896,
# df -k
Filesystem            kbytes    used   avail capacity  Mounted on
...
/dev/md/dsk/d10       138703   65426   59407    53%    /home2
...

In this example, a new slice was added to volume d10, which contains the mounted file system /home2. The -M option to the growfs command specifies the mount point /home2, and the file system is expanded onto the raw volume /dev/md/rdsk/d10. The file system spans the entire volume when the growfs command is complete. You can use the df -k command before and after the expansion to verify the total disk capacity.


Note - For mirror and transactional volumes, always run the growfs command on the top-level volume, not a submirror or master device, even though space is added to the submirror or master device.
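
For example, when a RAID 1 volume is expanded, a slice might first be attached to each submirror with the metattach command, and growfs is then run on the top-level mirror. In this sketch, d50 is a hypothetical mirror made up of submirrors d51 and d52, and the slice and mount point names are also illustrative:

# metattach d51 c2t0d0s2
# metattach d52 c3t0d0s2
# growfs -M /files /dev/md/rdsk/d50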


Overview of Replacing and Enabling Components in RAID 1 and RAID 5 Volumes

Solaris Volume Manager has the capability to replace and enable components within RAID 1 (mirror) and RAID 5 volumes.

In Solaris Volume Manager terms, replacing a component is a way to substitute an available component on the system for a selected component in a submirror or RAID 5 volume. You can think of this process as logical replacement, as opposed to physically replacing the component. (See "Replacing a Component With Another Available Component".)

Enabling a component means to "activate" the component, or to substitute a component with itself (that is, the component name remains the same). See "Enabling a Component".


Note - When recovering from disk errors, scan /var/adm/messages to see what kind of errors occurred. If the errors are transitory and the disks themselves do not have problems, try enabling the failed components. You can also use the format command to test a disk.


Enabling a Component

You can enable a component when any of the following conditions exist:

  • Solaris Volume Manager could not access the physical drive. This problem might have occurred, for example, because of a power loss or a loose drive cable. In this case, Solaris Volume Manager puts the components in the "Maintenance" state. You need to make sure that the drive is accessible (restore power, reattach cables, and so on), and then enable the components in the volumes.

  • You suspect that a physical drive is having transitory problems that are not disk-related. You might be able to fix a component in the "Maintenance" state by simply enabling it. If this does not fix the problem, then you need to either physically replace the disk drive and enable the component, or replace the component with another available component on the system.

    When you physically replace a drive, be sure to partition it like the old drive to ensure adequate space on each used component.


Note - Always check for state database replicas and hot spares on the drive being replaced. Any state database replica shown to be in error should be deleted before you replace the disk. Then, after you enable the component, re-create the replicas (at the same size). You should treat hot spares in the same manner.
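
A minimal sketch of enabling a component in place follows; the volume and component names are hypothetical. The -e option of the metareplace command re-enables the named component after the underlying problem (power, cabling, or a physically replaced drive) has been corrected:

# metareplace -e d20 c1t4d0s2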


Replacing a Component With Another Available Component

You use the metareplace command when you replace or swap an existing component with a different component that is available and not in use on the system.
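
A minimal sketch of such a replacement, assuming a hypothetical RAID 1 volume d20 with a failing component c1t4d0s2 and an unused slice c2t5d0s2 elsewhere on the system:

# metareplace d20 c1t4d0s2 c2t5d0s2

The first slice named is the component being replaced, and the second is the available component that takes its place; both names are illustrative only.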

You can use this command when any of the following conditions exist:

  • A disk drive has problems, and you do not have a replacement drive, but you do have available components elsewhere on the system.

    You might want to use this strategy if a replacement is absolutely necessary but you do not want to shut down the system.

  • You are seeing soft errors.

    Physical disks might report soft errors even though Solaris Volume Manager shows the mirror/submirror or RAID 5 volume in the "Okay" state. Replacing the component in question with another available component enables you to perform preventative maintenance and potentially prevent hard errors from occurring.

  • You want to do performance tuning.

    For example, by using the performance monitoring feature available from the Enhanced Storage tool within the Solaris Management Console, you see that a particular component in a RAID 5 volume is experiencing a high load average, even though it is in the "Okay" state. To balance the load on the volume, you can replace that component with a component from a disk that is less utilized. You can perform this type of replacement online without interrupting service to the volume.

 
 
 