Generic recommendations and best practices for SAN-attached storage
- Volume group layout
- when possible, use a single volume per volume group; when that is
not possible, concatenate all SATA or blade volumes in a volume group
into a single logical volume at the host; the fewer volumes per
volume group (disk), the better
- all volumes in a SATA or blade volume group should be owned by
the same controller and the same host
- accomplish load balancing by presenting each volume group to the
host over a different I/O path
- use all the drive-side I/O channels when laying out a volume
group: one drive per tray, alternating even and odd drive slots per
loop pair so both channels are used; multiples of 4 drives per
volume group are easiest to configure this way and work in both RAID
5 and RAID 1 configurations (see the sketch at the end of this
section)
- never configure plaid striping
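A minimal sketch, in Python, of the channel-layout rule above; the
tray/slot model and function name are illustrative, not a vendor API:

    # Sketch: pick (tray, slot) pairs for a volume group: one drive per
    # tray, alternating even and odd slot numbers so both channels of
    # each loop pair carry I/O. The numbering scheme is hypothetical.
    def pick_drives(num_drives, trays):
        if num_drives % 4:
            raise ValueError("use multiples of 4 drives per volume group")
        if num_drives > len(trays):
            raise ValueError("one drive per tray: not enough trays")
        # the slot number alternates even/odd as it increments, spreading
        # consecutive drives across both loop channels
        return [(tray, i) for i, tray in enumerate(trays[:num_drives])]

    print(pick_drives(4, trays=["t1", "t2", "t3", "t4"]))
    # [('t1', 0), ('t2', 1), ('t3', 2), ('t4', 3)]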
- SAN/Host
- Micro-zone each HBA (initiator) to a single disk controller
(target) per zone, making sure each HBA still connects to both
targets (see the sketch at the end of this section)
- do not put tape and disk on the same HBAs
- use the HBA drivers and firmware listed on the
interoperability matrix
- use the tested and supported HBA settings
- use the supported failover drivers
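A sketch of the micro-zoning rule above: one zone per (HBA,
controller) pair, so every zone holds a single initiator and a single
target while each HBA still reaches both controllers. The WWPNs and
names are fabricated for illustration:

    # Sketch: generate single-initiator/single-target ("micro") zones.
    # All WWPNs below are made-up examples.
    hbas = {"host1_hba0": "10:00:00:00:c9:aa:aa:aa",
            "host1_hba1": "10:00:00:00:c9:bb:bb:bb"}
    controllers = {"ctrl_a": "20:00:00:a0:b8:11:11:11",
                   "ctrl_b": "20:00:00:a0:b8:22:22:22"}

    # one zone per HBA/controller pair: 2 HBAs x 2 controllers = 4 zones
    zones = {f"z_{h}_{c}": [hbas[h], controllers[c]]
             for h in hbas for c in controllers}

    for name, members in sorted(zones.items()):
        print(name, members)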
- Bladestore and SATA
- these drives are designed for large-block streaming I/O such as
that found in disk-to-disk-to-tape (D-D-T) backup applications
- ensure adequate hot spares
(at least 1 spare per two trays; see the sketch at the end of this
section)
- proactive monitoring and a proactive blade data migration process
- 114C blade code + latest controller firmware
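A trivial sketch of the hot-spare rule above (one spare per two
trays, rounded up so an odd tray count is still covered):

    import math

    def min_hot_spares(tray_count):
        # at least 1 hot spare per two trays, rounded up
        return math.ceil(tray_count / 2)

    for trays in (1, 2, 5, 8):
        print(trays, "trays ->", min_hot_spares(trays), "spare(s)")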
- Disk array controller configuration
- multipath connectivity to all hosts, because controllers will
reboot when they encounter anomalies
- enable cache mirroring to prevent data corruption during controller
reboots
- monitor the event log and proactively migrate drive data off
"suspect" drives (see the sketch at the end of this section)
- Backup application settings
- Limit to 4 streams per array volume group (see the sketch at the
end of this section)
- use sequential storage pools on "cooked" filesystems rather than
raw devices or random I/O
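A sketch of the four-stream limit above, assuming the backup
application lets you wrap its stream writers; one semaphore per
volume group caps concurrency (all names are illustrative):

    import threading

    MAX_STREAMS = 4  # per array volume group
    vg_limits = {"vg01": threading.Semaphore(MAX_STREAMS)}

    def run_stream(volume_group, write_stream):
        # blocks once 4 streams are already writing to this volume group
        with vg_limits[volume_group]:
            write_stream()

    threads = [threading.Thread(target=run_stream,
                                args=("vg01", lambda: print("streaming...")))
               for _ in range(8)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()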
- Performance
- to increase performance, look at the filesystem block size; when
using large-block streaming I/O applications, match the filesystem
block size to the stripe size of the disk array for best throughput;
larger filesystem block sizes are generally better for streaming I/O
(see the sketch at the end of this section)
- RAID 5 is best for large-block streaming I/O (not counting RAID 0)
- RAID 1 is best for random I/O (not counting RAID 0)
- Multiple concurrent streams of I/O into the same volume group
will look (and perform) like random I/O to the physical disk drives
- If not using AVT (Auto Volume Transfer) failover on any attached
hosts, disable AVT on the entire array to increase cache usage
- Do not put tape and disk on the same HBAs
- Tape I/O consists of large streams and will congest the HBA. Disk
I/O tends to be smaller and more random in nature and requires
faster response times.
- When disk I/O is "slow", the host tends to try alternate paths,
which creates more performance issues due to the volume(s) moving
between controllers.
- Performance can be adversely affected by increasing the number of
targets per initiator, in the same manner in which speeds on
highways are affected by increasing the number of on-ramps and exit
ramps, and therefore cars.
- Tape and disk subsystems usually require different HBA settings
for performance tuning. When both are used, the settings cannot be
ideal for both.
- Interoperability testing is seldom done with both subsystems
attached, so the interoperability matrix frequently specifies
different firmware levels, software settings, configuration
settings, hardware combinations, and operating system levels and
settings for each.
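A sketch of the stripe-size match described above, assuming a
hypothetical array where the full stripe width is the segment size
times the number of data drives (RAID 5 on N drives has N-1 data
drives; RAID 1 has half):

    def stripe_width_kb(segment_kb, drives, raid_level):
        # data drives: all but one for RAID 5, half for RAID 1
        data_drives = drives - 1 if raid_level == 5 else drives // 2
        return segment_kb * data_drives

    # example: 4-drive RAID 5 with 128 KB segments -> 384 KB stripe
    stripe = stripe_width_kb(segment_kb=128, drives=4, raid_level=5)
    fs_block_kb = 384
    if fs_block_kb % stripe == 0:
        print(f"filesystem block {fs_block_kb} KB aligns with {stripe} KB stripe")
    else:
        print(f"mismatch: {fs_block_kb} KB block vs {stripe} KB stripe")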
Last update: 18 July 2006; Jeff Nieusma