RAID Software Suite Installation for Linux
The I2O kernel modules were developed as an Open Source project for two kernel versions: 2.2.x (the current stable series) and 2.3.x (the development series leading to 2.4). The modules can be obtained, along with the latest kernel sources, from ftp://ftp.kernel.org/pub/linux/kernel/ . We suggest you obtain the very latest modules. This CD-ROM contains the 2.2.x version of the I2O modules.

Install the RAID Software Suite onto an Existing Linux Server
1. If X Windows is not available, run the install script: $ /mnt/cdrom/install
2.
Note:
3.
a. If you wish to install source code later than Linux 2.2.15, select No. The screen in Bypass Installation of Kernel Source Code appears. The source code on the CD-ROM is not installed, and the remaining components of the software suite are installed.

Install the Kernel Sources
Install Kernel Sources from Install Program
Bypass Installation of Kernel Source Code
4.
5.
6.
7.

Rebuild the Kernel with I2O Support

The Linux RAID Software Suite installation program installs the latest kernel source with I2O support in the /usr/src/linux directory. You must rebuild the kernel with I2O support. I2O support can either be statically linked into the kernel or built as kernel modules. The kernel can be built with many options and should be customized for your particular machine.
1. $ make xconfig

The screen in Linux Kernel Configuration appears.

Linux Kernel Configuration
2.
a.
Configuring I2O Support as Kernel Modules
3. $ make dep

Additional reading for configuring and rebuilding the Linux kernel: Linux Kernel Configuration HOWTO at http://www.linuxdoc.org/HOWTO/Kernel-HOWTO.html
4. Edit /etc/lilo.conf and add an image entry for the new kernel next to the existing one, for example:

image=/boot/vmlinuz-2.2.12-20
image=/boot/vmlinuz-2.x.x-label
5.
6.
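The configure-and-build steps above can be sketched as one shell sequence for a 2.2.x source tree. The make targets are the standard 2.2 kernel build procedure; the guard only keeps the sketch from failing on a machine without the sources installed:

```shell
#!/bin/sh
# Sketch: rebuild a 2.2.x kernel with I2O support after configuring it
# with `make xconfig`. Standard 2.2 build targets are assumed.
KERNEL_SRC=/usr/src/linux            # where the install program puts the sources
if [ -f "$KERNEL_SRC/Makefile" ]; then
    ( cd "$KERNEL_SRC" &&
      make dep &&                    # rebuild dependency information
      make bzImage &&                # build the compressed kernel image
      make modules &&                # build I2O (and other) modules
      make modules_install )         # install modules under /lib/modules
else
    echo "no kernel source tree at $KERNEL_SRC"
fi
```

Follow this with the lilo.conf edit and a LILO run as described in the remaining steps.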
7. If you built the I2O support statically into the kernel, then the RAID controller card will be automatically detected at bootup. If you built the kernel with I2O support as kernel modules, you must load the kernel modules.
1. $ modprobe i2o_config
2. Type the following on the command line:

$ lsmod
Module                  Size  Used by

For more information on Linux kernel modules, read /usr/src/linux/Documentation/modules.txt
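Loading and verifying the modules can be sketched as follows. Only i2o_config appears in this document; the other module names (i2o_core, i2o_pci, i2o_block) are assumptions based on the stock 2.2.x I2O subsystem and may differ in your build:

```shell
#!/bin/sh
# Sketch: load the I2O modules and confirm they are resident.
# Module names besides i2o_config are assumptions; adjust to your kernel build.
MODULES="i2o_core i2o_pci i2o_block i2o_config"
for m in $MODULES; do
    if modprobe "$m" 2>/dev/null; then
        echo "loaded $m"
    else
        echo "could not load $m (not built as a module?)"
    fi
done
lsmod | grep i2o || echo "no i2o modules resident"
```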
The RAID Configuration Services allow you to configure your RAID volumes, disks, and arrays. You must start RAID Configuration Services before configuring your RAID volume. Type the following at the command line:

$ /opt/iir/bin/iird start | stop | restart

Access the RAID Configuration Services Administration through the shortcut located off of the system branch of your GNOME/KDE start menu. This program may be called from any of the init scripts at boot-up and shutdown time. However, the RAID Configuration Services are only required to run when you are configuring the RAID subsystem with RAID Storage Console. Once the RAID subsystem configuration is completed, you may wish to stop RAID Configuration Services to free system resources.

RAID Configuration Services Administration

The RAID Configuration Services Administration utility is used to configure parameters of the RAID Configuration Services. This X Windows program provides the ability to manage user access, TCP/IP port settings, and remote access of the RAID Configuration Services. An alternative to using the RAID Configuration Services Administration utility is to manually edit the /opt/iir/iirserver/etc/config.iir text file and restart the RAID Configuration Services.

The RAID Storage Console is an HTML-based configuration utility that is used to configure the RAID subsystem. The RAID Storage Console can be accessed locally or remotely.

Local: Access the local RAID Storage Console through the shortcut located off of the system branch of your GNOME/KDE start menu.

Remote: The RAID Storage Console can also be accessed remotely. If you want to access it remotely, use a web browser and point it to:

The RAID Storage Console is under restricted access. For authorized access, use the following initial username/password pair: root/root
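The service lifecycle described above can be sketched as follows. The iird path is taken from this document; the guard only keeps the sketch harmless on systems without the suite installed:

```shell
#!/bin/sh
# Sketch: run the RAID Configuration Services only while configuring
# the subsystem with RAID Storage Console, then stop them.
IIRD=/opt/iir/bin/iird
if [ -x "$IIRD" ]; then
    "$IIRD" start            # start the services before using Storage Console
    # ... configure the RAID subsystem with RAID Storage Console ...
    "$IIRD" stop             # free system resources when configuration is done
else
    echo "RAID Software Suite not installed ($IIRD missing)"
fi
```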
Warning:
See the RAID Controller User's Manual for a complete description of the RAID Storage Console utility and its usage.

IRVIEW is a command-line utility that provides real-time status of the RAID subsystem.

Using RAID Configuration Services Administration
1. Access the RAID Configuration Services Administration through the shortcut located off of the system branch of your GNOME/KDE start menu. The screen in RAID Configuration Services Administration appears.
2.
The following parameters are available:

CGI File: This parameter indicates the interface to the I/O processor and should not be changed.

PORT: This is the TCP/IP port used to access the CGI file. The default value is 960.

Access type: This parameter indicates whether or not RAID Storage Console can be accessed from a remote server. The default value is LOCAL. For remote access, change the value to REMOTE.

After any change to the RAID Configuration Server, restart the server with the following command:

/opt/iir/bin/iird restart
3.
To add a new user, enter a new user name in the text field and click the button labeled Add. Enter a new password at the prompt. See User Management Add User Screen.
To change the password of an existing user, enter an existing username and click on Change. Complete the password prompts as directed. See User Management Change Password.
User Management changes do not require a restart of the RAID Configuration Services to take effect.

Configure a RAID Volume in a Linux System
Warning:
1. Use fdisk to create partitions on the RAID volumes with the following command:

$ fdisk /dev/i2o/hd[a-p]

where a is the first RAID volume created and p is the sixteenth.
Note:
2. $ mke2fs /dev/i2o/hd[a-p][1-15]

where [1-15] is the partition number.
Note:
3. $ mount /dev/i2o/hd[a-p][1-15] /mnt/i2o

Verify that the RAID volume appears as a mounted directory.
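The three steps above can be sketched as one sequence. The device name /dev/i2o/hda and partition 1 are examples only; substitute the letter and partition number for your own volume:

```shell
#!/bin/sh
# Sketch: prepare and mount an I2O RAID volume (steps 1-3 above).
# /dev/i2o/hda and partition 1 are example names; substitute your own.
DEV=/dev/i2o/hda
PART=${DEV}1
MNT=/mnt/i2o
if [ -b "$PART" ]; then
    mke2fs "$PART"                   # build an ext2 file system (step 2)
    mkdir -p "$MNT"
    mount -t ext2 "$PART" "$MNT"     # mount the volume (step 3)
    df "$MNT"                        # confirm it shows up as mounted
else
    echo "no partition at $PART; create one first with: fdisk $DEV"
fi
```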
Monitor the Status of the RAID Subsystem

Monitor the status of the RAID subsystem by using the command-line utility called irview. To invoke this utility, type /usr/local/bin/irview on the command line. To get more information on irview, invoke the corresponding man page (man irview). A sample irview screen is shown in irview Screen.

irview Screen
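A status check from a script might look like the following; the irview path is taken from this document, and the guard only covers systems where the suite is not installed:

```shell
#!/bin/sh
# Sketch: invoke irview to display real-time RAID subsystem status.
IRVIEW=/usr/local/bin/irview
if [ -x "$IRVIEW" ]; then
    "$IRVIEW"                # see `man irview` for options and output format
else
    echo "irview not installed at $IRVIEW"
fi
```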
Uninstalling RAID Software Suite

To uninstall the RAID Software Suite, run the following:
Note:
Booting to Red Hat Linux on an I2O RAID Volume

There are two methods available for booting Red Hat Linux from an I2O RAID Controller. The first method installs Red Hat Linux on an IDE or SCSI hard drive and loads LILO (the Linux Loader) from the Master Boot Record (MBR) on that original drive (/dev/hda); LILO then loads the Linux kernel, which is located on a disk or volume connected to the I2O RAID Controller. The second method installs LILO onto the MBR of the I2O boot device (/dev/i2o/hda) and allows the system to boot directly from a disk or volume connected to the I2O RAID Controller.
Warning:
Method 1 assumes that you have installed Red Hat Linux 6.x on the primary IDE drive (/dev/hda). Although an IDE drive is assumed for these instructions, Method 1 can also be used when Red Hat Linux is installed on a SCSI hard drive; in that case the boot device file will differ. Once you have completed the Red Hat installation, install the I2O kernel as described in Rebuild the Kernel with I2O Support. After the RAID Software Suite installation is completed and the system is rebooted with I2O support, follow the steps below to boot your system from the I2O drive.
1.
2.
3.
4. $ dd if=/dev/hda1 of=/dev/i2o/hda1
5. $ mount /dev/i2o/hda1 /mnt/i2o -t ext2
6. image=/mnt/i2o/boot/vmlinuz-i2o-version # Put your i2o kernel version here
7. /dev/i2o/hda1 / ext2 defaults 1 1

The above line causes the kernel to mount /dev/i2o/hda1 as the root file system.
8. $ /sbin/lilo-i2o

Method 2 - MBR on I2O RAID Volume

Method 2 assumes that you have installed Red Hat Linux 6.x on the primary IDE drive (/dev/hda). Copy the Linux kernel, LILO, and the MBR to a RAID volume that is connected to the I2O RAID controller. To boot from the RAID volume, a modified version of the LILO binary is required. This modified version ignores the existence of IDE drives on the system.
Note:
1.
2.
3.
4. $ dd if=/dev/hda1 of=/dev/i2o/hda1
5. $ mount /dev/i2o/hda1 /mnt/i2o -t ext2
6. boot=/dev/i2o/hda
7. /dev/i2o/hda1 / ext2 defaults 1 1

The above line causes the kernel to mount /dev/i2o/hda1 as the root file system.
8. /dev/hda5 swap swap defaults 0 0
/dev/i2o/hda2 swap swap defaults 0 0
9. $ /sbin/lilo-i2o-hack -C /mnt/i2o/etc/lilo.conf
$ shutdown -h now
11.
12. boot=/dev/i2o/hda

In the future, if you edit lilo.conf, run LILO with I2O support (lilo-i2o) instead of the modified LILO (lilo-i2o-hack).

Linux Dynamic Block Device Limitations

The RAID volumes created using the I2O Linux drivers are seen by Linux as "block devices". The RAID Software Suite provides a powerful new feature: new block devices can be created while a Linux system is up and running, via the Storage Console. The new RAID volume is detected as /dev/i2o/hd* and can be used immediately as a normal block storage device. Such a block device is dynamically created while the system is up and running and is associated with a particular device file (/dev/i2o/hd*) and a major/minor number; however, Linux does not associate a physical storage device with the device file (or rather the major/minor number) across reboots. The implication of this limitation is that when a reboot occurs after a user has dynamically created a block device, the device files associated with the physical storage devices may change once the system comes back online. If there were entries in /etc/fstab and /etc/mtab associated with these dynamic block devices, they will no longer be valid, and the system will either mount these devices incorrectly or not mount them at all. Your system may not boot if the root file system was also on a dynamic block device.

Workarounds
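The specific workarounds are not preserved in this copy. One generic approach (an assumption, not necessarily what this manual recommends) is to label each ext2 file system and mount by label, so /etc/fstab no longer depends on which /dev/i2o/hd* name a volume receives after a reboot:

```shell
#!/bin/sh
# Assumed workaround sketch: mount dynamically created volumes by ext2 label
# instead of by device name. e2label and LABEL= fstab entries are standard
# e2fsprogs/mount features; verify your distribution supports them.
PART=/dev/i2o/hda1          # example device name; may change across reboots
FSLABEL=raidvol0            # a stable name we choose for the file system
if [ -b "$PART" ]; then
    e2label "$PART" "$FSLABEL"          # stamp the label onto the file system
    # /etc/fstab entry that survives device renaming:
    echo "LABEL=$FSLABEL /mnt/i2o ext2 defaults 0 0"
else
    echo "no block device at $PART; the fstab entry would be:"
    echo "LABEL=$FSLABEL /mnt/i2o ext2 defaults 0 0"
fi
```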