Restoring the Virtual I/O Server



Since there are four ways to back up the Virtual I/O Server, there are four corresponding ways to restore it.

Restoring from a tape or DVD

To restore the Virtual I/O Server from tape or DVD, follow these steps:

1. Specify that the Virtual I/O Server partition boots from the tape or DVD drive by
using the bootlist command or by altering the boot list in the SMS menu (see the example after this list).
2. Insert the tape or DVD into the drive.
3. From the SMS menu, select to install from the tape or DVD drive.
4. Follow the installation steps according to the system prompts.
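
A minimal sketch of step 1 from the padmin command line, assuming the tape drive is rmt0 (substitute the DVD device, for example cd0, when booting from DVD):
$ bootlist -mode normal rmt0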

Restoring the Virtual I/O Server from a remote file system using a nim_resources.tar file

To restore the Virtual I/O Server from a nim_resources.tar image in a file system, perform the following steps:

1. Run the installios command without any flags from the HMC command line.
a) Select the Managed System where you want to restore your Virtual I/O Server
from the objects of type "managed system" found by the installios command.
b) Select the VIOS Partition where you want to restore your system from the
objects of type "virtual I/O server partition" found.

c) Select the Profile from the objects of type "profile" found.
d) Enter the source of the installation images [/dev/cdrom]:
server:/exported_dir
e) Enter the client's intended IP address:

f) Enter the client's intended subnet mask:

g) Enter the client's gateway:

h) Enter the client's speed [100]:

i) Enter the client's duplex [full]:

j) Would you like to configure the client's network after the installation
[yes]/no?

2. When the restoration is finished, open a virtual terminal connection (for
example, using telnet) to the Virtual I/O Server that you restored. Some
additional user input might be required.

Note: The ability to run the installios command from the NIM server against the nim_resources.tar file is enabled with APAR IY85192.

Restoring the Virtual I/O Server from a remote file system using a mksysb image

To restore the Virtual I/O Server from a mksysb image in a file system using NIM, complete the following tasks:

1. Define the mksysb file as a NIM object by running the nim command.
#nim -o define -t mksysb -a server=master -a location=/export/ios_backup/filename.mksysb objectname
objectname is the name by which NIM registers and recognizes the mksysb
file.
2. Define a SPOT resource for the mksysb file by running the nim command.
#nim -o define -t spot -a server=master -a location=/export/ios_backup/SPOT -a source=objectname SPOTname
SPOTname is the name of the SPOT resource for the mksysb file.
3. Install the Virtual I/O Server from the mksysb file using the smit command.
#smit nim_bosinst
The following entry fields must be filled in:
“Installation type” => mksysb
“Mksysb” => the objectname chosen in step 1
“Spot” => the SPOTname chosen in step 2
4. Start the Virtual I/O Server logical partition.
a) On the HMC, right-click the partition to open the menu.
b) Click Activate. The Activate Partition menu opens with a selection of
partition profiles. Be sure the correct profile is highlighted.
c) Select the Open a terminal window or console session check box to open a
virtual terminal (vterm) window.
d) Click (Advanced...) to open the advanced options menu.
e) For the Boot mode, select SMS.
f) Click OK to close the advanced options menu.
g) Click OK. A vterm window opens for the partition.
h) In the vterm window, select Setup Remote IPL (Initial Program Load).
i) Select the network adapter that will be used for the installation.
j) Select IP Parameters.
k) Enter the client IP address, server IP address, and gateway IP address.
Optionally, you can enter the subnet mask. After you have entered these
values, press Esc to return to the Network Parameters menu.
l) Select Ping Test to ensure that the network parameters are properly
configured. Press Esc twice to return to the Main Menu.
m) From the Main Menu, select Boot Options.
n) Select Install/Boot Device.
o) Select Network.
p) Select the network adapter whose remote IPL settings you previously
configured.
q) When prompted for Normal or Service mode, select Normal.
r) When asked if you want to exit, select Yes.

Integrated Virtualization Manager (IVM) Consideration

If your Virtual I/O Server is managed by the IVM, you need to back up the partition profile data for the management partition and its clients before backing up your system. Although the IVM is integrated with the Virtual I/O Server, the LPAR profiles are not saved by the backupios command.

There are two ways to perform this backup:
From the IVM Web Interface
1) From the Service Management menu, click Backup/Restore
2) Select the Partition Configuration Backup/Restore tab
3) Click Generate a backup

From the Virtual I/O Server CLI
1) Run the following command
#bkprofdata -o backup

Both methods generate a file named profile.bak containing the LPAR configuration information. When you use the Web interface, the default path for the file is /home/padmin; if you perform the backup from the CLI, the default path is /var/adm/lpm. This path can be changed using the -l flag. Only ONE file can be present on the system, so each time bkprofdata is issued or the Generate a backup button is pressed, the file is overwritten.

To restore the LPAR profiles, you can use either the GUI or the CLI.

From the IVM Web Interface
1) From the Service Management menu, click Backup/Restore
2) Select the Partition Configuration Backup/Restore tab
3) Click Restore Partition Configuration

From the Virtual I/O Server CLI
1) Run the following command
#rstprofdata -l 1 -f /home/padmin/profile.bak

It is not possible to restore a single partition profile. To restore the LPAR profiles, none of the LPAR profiles included in profile.bak can already be defined in the IVM.
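
A minimal sketch of checking for and removing an already defined client partition from the IVM command line before the restore; the partition name client1 is an assumption, and rmsyscfg deletes the partition definition, so use it only for client partitions that must not exist before the restore:
$ lssyscfg -r lpar -F name,lpar_id
$ rmsyscfg -r lpar -n client1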

Backup of Virtual I/O Server



Backing up the Virtual I/O Server

There are four different ways to back up and restore the Virtual I/O Server, as illustrated in the following table.

Backup method             Restore method
To tape                   From bootable tape
To DVD                    From bootable DVD
To remote file system     From the HMC using the NIMoL facility and installios
To remote file system     From an AIX NIM server


Backing up to a tape or DVD-RAM

To back up the Virtual I/O Server to a tape or a DVD-RAM, perform the following steps:

1. Check the status and the name of the tape or DVD drive.
#lsdev | grep rmt (for tape)
#lsdev | grep cd (for DVD)

2. If the device is Available, back up the Virtual I/O Server with one of the following commands:
#backupios -tape rmt#
#backupios -cd cd#

If the Virtual I/O Server backup image does not fit on one DVD, then the backupios command provides instructions for disk replacement and removal until all the volumes have been created. This command creates one or more bootable DVDs or tapes that you can use to restore the Virtual I/O Server.

Backing up the Virtual I/O Server to a remote file system by creating a nim_resources.tar file

The nim_resources.tar file contains all the resources necessary to restore the Virtual I/O Server, including the mksysb image, the bosinst.data file, the network boot image, and the SPOT resource.
The NFS export should allow root access to the Virtual I/O Server; otherwise the backup will fail with permission errors.
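For example, an AIX-style /etc/exports entry on the NFS server could look like the following; the export path and the VIOS hostname (vios1) are assumptions for illustration only:
/export/ios_backup -rw,root=vios1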

To back up the Virtual I/O Server to a file system, perform the following steps:

1. Create a mount directory where the backup file will be written
#mkdir /backup_dir

2. Mount the exported remote directory on the directory created in step 1.
#mount server:/exported_dir /backup_dir

3. Back up the Virtual I/O Server with the following command:
#backupios -file /backup_dir

The above command creates a nim_resources.tar file that you can use to restore the Virtual I/O Server from the HMC.

Note: The ability to run the installios command from the NIM server against the nim_resources.tar file is enabled with APAR IY85192.


The backupios command empties the target_disk_data section of bosinst.data and sets RECOVER_DEVICES=Default. This allows the mksysb file generated by the command to be cloned to another logical partition. If you plan to use the nim_resources.tar image to install to a specific disk, then you need to repopulate the target_disk_data section of bosinst.data and replace this file in the nim_resources.tar. All other parts of the nim_resources.tar image must remain unchanged.

Procedure to modify the target_disk_data in the bosinst.data

1. Extract the bosinst.data file from nim_resources.tar.
#tar -xvf nim_resources.tar ./bosinst.data

2. The following is an example of the target_disk_data stanza of the bosinst.data generated by backupios.
target_disk_data:
LOCATION =
SIZE_MB =
HDISKNAME =

3. Fill in the value of HDISKNAME with the name of the disk to which you want to restore.
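
For example, assuming you want the restore to go to hdisk0, the edited stanza would look like this:
target_disk_data:
LOCATION =
SIZE_MB =
HDISKNAME = hdisk0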

4. Put the modified bosinst.data back into the nim_resources.tar image.
#tar -uvf nim_resources.tar ./bosinst.data

If you don't remember which disk your Virtual I/O Server was previously installed on, you can also view the original bosinst.data and look at its target_disk_data stanza.
Use the following steps:

1. Extract the bosinst.data file from nim_resources.tar.
#tar -xvf nim_resources.tar ./bosinst.data
2. Extract the mksysb image from nim_resources.tar.
#tar -xvf nim_resources.tar ./5300-00_mksysb
3. Extract the original bosinst.data from the mksysb image.
#restore -xvf ./5300-00_mksysb ./var/adm/ras/bosinst.data
4. View the original target_disk_data stanza.
#grep -p target_disk_data ./var/adm/ras/bosinst.data

The above command displays something like the following:

target_disk_data:
PVID = 00c5951e63449cd9
PHYSICAL_LOCATION = U7879.001.DQDXYTF-P1-T14-L4-L0
CONNECTION = scsi1//5,0
LOCATION = 0A-08-00-5,0
SIZE_MB = 140000
HDISKNAME = hdisk0

5. Replace ONLY the target_disk_data stanza in ./bosinst.data with the original one.
6. Add the modified file back to nim_resources.tar.
#tar -uvf nim_resources.tar ./bosinst.data


Backing up the Virtual I/O Server to a remote file system by creating a mksysb image

You can also restore the Virtual I/O Server from a NIM server, using a mksysb image of the Virtual I/O Server. If you plan to restore from a NIM server using a mksysb image, verify that the NIM server is at the latest release of AIX.

To back up the Virtual I/O Server to a file system, perform the following steps:

1. Create a mount directory where the backup file will be written.
#mkdir /backup_dir
2. Mount the exported remote directory on the directory created in step 1.
#mount NIM_server:/exported_dir /backup_dir
3. Back up the Virtual I/O Server with the following command:
#backupios -file /backup_dir/filename.mksysb -mksysb

Reference: http://santosh-aix.blogspot.com


Updating the VIO Server patch level



Applying updates from a local hard disk

To apply the updates from a directory on your local hard disk, follow one of these two procedures, depending on your currently installed level of VIOS.

A. If the current level of the VIOS is earlier than V1.2.0.0 (V1.0 or V1.1):

NOTE:
If you are updating from VIOS level 1.1, you must update to the 10.1 level of the Fix Pack before updating to the 11.1 level of Fix Pack. In other words, if you are at level 1.1, updating to the 11.1 Fix Pack is a two-step process: First, update to version 10.1 Fix Pack, and then update to the 11.1 Fix Pack.

Contact your IBM Service Representative to obtain the VIOS 10.1 Fix Pack.

After you install the 10.1 Fix Pack, follow these steps to install the 11.1 Fix Pack.

Log in to the VIOS as the user padmin.
Create a directory on the Virtual I/O Server.
$ mkdir <directory>

Using ftp, transfer the update file(s) to the directory you created.
Apply the update by running the updateios command.
$ updateios -accept -dev <directory>

Verify that the update was successful by checking the results of the updateios command and by running the ioslevel command. It should indicate that the ioslevel is now V1.5.2.1-FP-11.1.
$ ioslevel
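
A worked sketch of the sequence above, assuming the fix pack files were transferred to a hypothetical directory /home/padmin/update:
$ mkdir /home/padmin/update
(transfer the update files into /home/padmin/update with ftp)
$ updateios -accept -dev /home/padmin/update
$ ioslevel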

B. If the current level of the VIOS is V1.2 through V1.5:

Log in to the VIOS as the user padmin.
Create a directory on the Virtual I/O Server.
$ mkdir <directory>

Using ftp, transfer the update file(s) to the directory you created.
Apply the update by running the updateios command.
$ updateios -accept -install -dev <directory>

Verify that the update was successful by checking the results of the updateios command and by running the ioslevel command. It should indicate that the ioslevel is now V1.5.2.1-FP-11.1.
$ ioslevel


NOTE: If you are updating from an ioslevel prior to 1.3.0.1, the updateios command may report several failures (such as missing requisites) during fix pack installation. These messages are expected. Proceed with the update if you are prompted to "Continue with the installation [y/n]".

Applying updates from a remotely mounted file system

If the remote file system is to be mounted read-only, follow one of these two procedures, depending on your currently installed level of VIOS.

A. If the current level of the VIOS is earlier than V1.2.0.0 (V1.0 or V1.1):
NOTE:
If you are updating from VIOS level 1.1, you must update to the 10.1 level of the Fix Pack before updating to the 11.1 level of Fix Pack. In other words, if you are at level 1.1, updating to the 11.1 Fix Pack is a two-step process: First, update to version 10.1 Fix Pack, and then update to the 11.1 Fix Pack.

Contact your IBM Service Representative to obtain the VIOS 10.1 Fix Pack.

After you install the 10.1 Fix Pack, follow these steps to install the 11.1 Fix Pack.

Log in to the VIOS as the user padmin.
Mount the remote directory onto the Virtual I/O Server.
$ mount remote_machine_name:directory /mnt
Apply the update by running the updateios command.
$ updateios -accept -dev /mnt
Verify that the update was successful by checking the results of the updateios command and by running the ioslevel command. It should indicate that the ioslevel is now V1.5.2.1-FP-11.1.
$ ioslevel
B. If the current level of the VIOS is V1.2 through V1.5:

Log in to the VIOS as the user padmin.
Mount the remote directory onto the Virtual I/O Server.
$ mount remote_machine_name:directory /mnt
Apply the update by running the updateios command
$ updateios -accept -install -dev /mnt
Verify that the update was successful by checking the results of the updateios command and by running the ioslevel command. It should indicate that the ioslevel is now V1.5.2.1-FP-11.1.
$ ioslevel
Applying updates from the CD/DVD drive

This fix pack can be burned onto a CD by using the ISO image file(s). After the CD has been created, follow one of these two procedures, depending on your currently installed level of VIOS.

A. If the current level of the VIOS is earlier than V1.2.0.0 (V1.0 or V1.1):
NOTE:
If you are updating from VIOS level 1.1, you must update to the 10.1 level of the Fix Pack before updating to the 11.1 level of Fix Pack. In other words, if you are at level 1.1, updating to the 11.1 Fix Pack is a two-step process: First, update to version 10.1 Fix Pack, and then update to the 11.1 Fix Pack.

Contact your IBM Service Representative to obtain the VIOS 10.1 Fix Pack.

After you install the 10.1 Fix Pack, follow these steps to install the 11.1 Fix Pack.

Log in to the VIOS as the user padmin.
Place the CD-ROM into the drive assigned to VIOS.
Apply the update by running the updateios command
$ updateios -accept -dev /dev/cdX
where X is the device number 0-N assigned to VIOS
Verify that the update was successful by checking the results of the updateios command and by running the ioslevel command. It should indicate that the ioslevel is now V1.5.2.1-FP-11.1.
$ ioslevel
B. If the current level of the VIOS is V1.2 through V1.5:

Log in to the VIOS as the user padmin.
Place the CD-ROM into the drive assigned to VIOS.
Apply the update by running the following updateios command:
$ updateios -accept -install -dev /dev/cdX
where X is the device number 0-N assigned to VIOS
Verify that the update was successful by checking the results of the updateios command and by running the ioslevel command. It should indicate that the ioslevel is now V1.5.2.1-FP-11.1.
$ ioslevel

Reference: http://santosh-aix.blogspot.com

Expanding rootvg disk in VIO


Expanding a rootvg disk in a VIO environment where two VIO Servers have been implemented for redundancy.

This article describes the procedure for expanding a rootvg volume group for a POWER5 LPAR where two VIO Servers have been implemented for redundancy. It assumes that the rootvg is mirrored across both VIO Servers. This procedure is not supported by IBM, but it does work.
POWER5 LPAR:
• Begin by unmirroring your rootvg and removing hdisk1 from the rootvg volume group (see the sketch below). If there are any paging (swap) or dump devices on this disk, you may need to remove them first before you can remove hdisk1 from the rootvg volume group.
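
A minimal sketch of this step from the AIX command line, assuming the mirror copy to be removed lives on hdisk1:
#unmirrorvg rootvg hdisk1
#reducevg rootvg hdisk1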

• Once the disk has been removed from the rootvg, remove it from the LPAR by executing the following:
#rmdev -l hdisk1 -d

• Now execute the bosboot command and update your bootlist, since hdisk1 has been removed and is no longer part of the system:
#bosboot -a
#bootlist -o -m normal hdisk0

VIO Server (where hdisk1 was created):

• Remove the device from the VIO Server using the rmdev command:
#rmdev -dev <device_name>

• Next you will need to access the AIX OS part of the VIO Server by executing:
#oem_setup_env

• Now you have two options: you can extend the existing logical volume, or create a new one if there is enough free disk space. In this example I will be using bckcnim_lv. Use smitty extendlv to add additional LPs, or smitty mklv to create a new logical volume (a command-line sketch follows below).
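
A minimal command-line sketch of the two options; the LP count of 80 and the new LV name bckcnim_lv2 are assumptions for illustration (the resulting size depends on the volume group's PP size):
#extendlv bckcnim_lv 80          # option 1: add 80 logical partitions to the existing LV
#mklv -y bckcnim_lv2 rootvg 80   # option 2: create a new LV of 80 logical partitions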

• Exit out of oem_setup_env by just typing "exit" at the OS prompt.

• Now that you are back within the restricted shell of the VIO Server, execute the following command. You can use whatever device name you wish. I used bckcnim_hdisk1 just for example purposes:
#mkvdev -vdev bckcnim_lv -vadapter <vhost#> -dev bckcnim_hdisk1

POWER5 LPAR:
• Execute cfgmgr to add the new hdisk1 back to the LPAR:
#cfgmgr

• Add hdisk1 back to the rootvg volume group using the extendvg command or smitty extendvg.

• Mirror rootvg using the mirrorvg command or smitty mirrorvg (see the sketch below).

• Let the mirror synchronization run (it can be pushed to the background) and wait for it to complete. This is very important: the synchronization must finish before you proceed to the next step.
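
A minimal sketch of the extendvg, mirrorvg, and synchronization steps above, assuming the disk is discovered again as hdisk1:
#extendvg rootvg hdisk1
#mirrorvg rootvg hdisk1
#lsvg rootvg | grep -i stale    # confirm there are no stale physical partitions before continuing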

• Now you must execute bosboot again and update the bootlist again:
#bosboot -a
#bootlist -o -m normal hdisk0 hdisk1

Reference: http://santosh-aix.blogspot.com

Recovering a Failed VIO Disk




Here is a recovery procedure for replacing a failed client disk on a Virtual IO
server. It assumes the client partitions have mirrored (virtual) disks. The
recovery involves both the VIO server and its client partitions. However,
it is non disruptive for the client partitions (no downtime), and may be
non disruptive on the VIO server (depending on disk configuration). This
procedure does not apply to Raid5 or SAN disk failures.

The test system had two VIO servers and an AIX client. The AIX client had two
virtual disks (one disk from each VIO server). The two virtual disks
were mirrored in the client using AIX's mirrorvg. (The procedure would be
the same on a single VIO server with two disks.)

The software levels were:

p520: Firmware SF230_145
VIO Server: Version 1.2.0
Client: AIX 5.3 ML3

We had simulated the disk failure by removing the client LV on one VIO server. The
padmin commands to simulate the failure were:

#rmdev -dev vtscsi01 # The virtual scsi device for the LV (lsmap -all)
#rmlv -f aix_client_lv # Remove the client LV


This caused "hdisk1" on the AIX client to go "missing" (check with "lsvg -p rootvg"; note that "lspv" will not show the disk failure, only the disk status as of the last boot).

The recovery steps included:

VIO Server

Fix the disk failure, and restore the VIOS operating system (if necessary).
#mklv -lv aix_client_lv rootvg 10G # recreate the client LV
#mkvdev -vdev aix_client_lv -vadapter vhost1 # connect the client LV to the appropriate vhost

AIX Client

# cfgmgr                              # discover the new virtual hdisk2
# replacepv hdisk1 hdisk2             # rebuild the mirror copy on hdisk2
# bosboot -ad /dev/hdisk2             # add the boot image to hdisk2
# bootlist -m normal hdisk0 hdisk2    # add the new disk to the bootlist
# rmdev -dl hdisk1                    # remove the failed hdisk1

The "replacepv" command assigns hdisk2 to the volume group, rebuilds the mirror, and
then removes hdisk1 from the volume group.

As always, be sure to test this procedure before using in production.

Reference: http://santosh-aix.blogspot.com

Configuring MPIO for the virtual AIX client



Virtual SCSI Server Adapter and Virtual Target Device
The mkvdev command will error out if the same name is used for both the backing device and the virtual target device:

$ mkvdev -vdev hdiskpower0 -vadapter vhost0 -dev hdiskpower0
Method error (/usr/lib/methods/define -g -d):
0514-013 Logical name is required.

The reserve attribute is named differently for an EMC device than the attribute
for ESS or FasTt storage device. It is “reserve_lock”.

Run the following command as padmin to check the value of the attribute:
$ lsdev -dev hdiskpower# -attr reserve_lock

Run the following command as padmin to change the value of the attribute:
$ chdev -dev hdiskpower# -attr reserve_lock=no

• Commands to change the Fibre Channel adapter attributes: change the following attributes of the fscsi# device, fc_err_recov to “fast_fail” and dyntrk to “yes”.

$ chdev -dev fscsi# -attr fc_err_recov=fast_fail dyntrk=yes -perm

The reason for changing the fc_err_recov to “fast_fail” is that if the Fibre
Channel adapter driver detects a link event such as a lost link between a storage
device and a switch, then any new I/O or future retries of the failed I/Os will be
failed immediately by the adapter until the adapter driver detects that the device
has rejoined the fabric. The default setting for this attribute is 'delayed_fail’.
Setting the dyntrk attribute to “yes” makes AIX tolerate cabling changes in the
SAN.

The VIOS needs to be rebooted for fscsi# attributes to take effect.
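
For example, the reboot can be done from the padmin shell (plan for the outage this causes on a production VIOS):
$ shutdown -restart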

Reference: http://santosh-aix.blogspot.com


VIO Commands



VIO Server Commands


lsdev -virtual (list all virtual devices on VIO server partitions)
lsmap -all (lists mapping between physical and logical devices)
oem_setup_env (change to OEM [AIX] environment on VIO server)

Create Shared Ethernet Adapter (SEA) on VIO Server


mkvdev -sea {physical adapter} -vadapter {virtual eth adapter} -default {default virtual adapter} -defaultid {default VLAN ID}
SEA Failover
ent0 – GigE adapter
ent1 – Virt Eth VLAN1 (Defined with a priority in the partition profile)
ent2 – Virt Eth VLAN 99 (Control)
mkvdev -sea ent0 -vadapter ent1 -default ent1 -defaultid 1 -attr ha_mode=auto ctl_chan=ent2
(Creates ent3 as the Shared Ethernet Adapter)

Create Virtual Storage Device Mapping


mkvdev -vdev {LV or hdisk} -vadapter {vhost adapt} -dev {virt dev name}
Sharing a Single SAN LUN from Two VIO Servers to a Single VIO Client LPAR
hdisk3 = SAN LUN (on vioa server)
hdisk4 = SAN LUN (on viob, same LUN as vioa)
chdev -dev hdisk3 -attr reserve_policy=no_reserve (from vioa to prevent a reserve on the disk)
chdev -dev hdisk4 -attr reserve_policy=no_reserve (from viob to prevent a reserve on the disk)
mkvdev -vdev hdisk3 -vadapter vhost0 -dev hdisk3_v (from vioa)
mkvdev -vdev hdisk4 -vadapter vhost0 -dev hdisk4_v (from viob)
The VIO client would see a single LUN with two paths.
lspath -l hdiskx (where hdiskx is the newly discovered disk on the client)
This will show two paths, one down vscsi0 and the other down vscsi1.
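
On the client, the lspath output would look something like the following (hdisk2 and the vscsi device names are assumptions for illustration):
Enabled hdisk2 vscsi0
Enabled hdisk2 vscsi1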


VIO command from HMC
#viosvrcmd -m <managed_system> -p <vios_partition_name> -c "lsmap -all"

(this works only with the IBM VIO Server)

VIO Server Installation & Configuration



IBM Virtual I/O Server
The Virtual I/O Server is part of the IBM eServer p5 Advanced Power Virtualization hardware feature. Virtual I/O Server allows sharing of physical resources between LPARs including virtual SCSI and virtual networking. This allows more efficient utilization of physical resources through sharing between LPARs and facilitates server consolidation.

Installation
You have two options to install the AIX-based VIO Server:
1. Install from CD
2. Install from network via an AIX NIM-Server

Installation method #1 is probably the more frequently used method in a pure Linux environment, as installation method #2 requires the presence of an AIX NIM (Network Installation Management) server. Both methods differ only in the initial boot step and are then the same. They both lead to the following installation screen:

(The firmware boot screen fills with repeated "IBM" text and then shows "STARTING SOFTWARE / PLEASE WAIT...".)

Elapsed time since release of system processors: 51910 mins 20 secs

-------------------------------------------------------------------------------
Welcome to the Virtual I/O Server.
boot image timestamp: 10:22 03/23
The current time and date: 17:23:47 08/10/2005
number of processors: 1    size of memory: 2048MB
boot device: /pci@800000020000002/pci@2,3/ide@1/disk@0:\ppc\chrp\bootfile.exe
SPLPAR info: entitled_capacity: 50 platcpus_active: 2
This system is SMT enabled: smt_status: 00000007; smt_threads: 2
kernel size: 10481246; 32 bit kernel
-------------------------------------------------------------------------------




The next step then is to define the system console. After some time you should see the following screen:


******* Please define the System Console. *******

Type a 1 and press Enter to use this terminal as the system console.


Then choose the language of installation:


>>> 1 Type 1 and press Enter to have English during install.


This is the main installation menu of the AIX-based VIO-Server:



Welcome to Base Operating System
Installation and Maintenance
Type the number of your choice and press Enter. Choice is indicated by >>>.

>>> 1 Start Install Now with Default Settings
    2 Change/Show Installation Settings and Install
    3 Start Maintenance Mode for System Recovery

88 Help ? 99 Previous Menu

>>> Choice [1]:


Select the hard disk on which to install the VIO base operating system, just as you would for an AIX base operating system installation.


Once the installation is over, you will get a login prompt similar to that of an AIX server.

A VIO server is essentially AIX with virtualization software loaded on top of it. Generally we do not host any applications on a VIO server; it is basically used for sharing I/O resources (disk and network) with the client LPARs hosted in the same physical server.


Initial setup
After the reboot you are presented with the VIO Server login prompt. You cannot log in as the user root; you have to use the special user ID padmin. No initial default password is set. Immediately after login you are forced to set a new password.


Before you can do anything you have to accept the I/O Server license. This is done with the license command:

$ license -accept

Once you are logged in as user padmin, you find yourself in a restricted Korn shell with only a limited set of commands. You can see all available commands with the command help. All these commands are shell aliases to a single SUID binary called ioscli, which is located in the directory /usr/ios/cli/bin. If you are familiar with AIX you will recognize most commands, but most command-line parameters differ from the AIX versions.
As there are no man pages available, you can see all options for each command separately by issuing the command help. Here is an example for the command lsmap:

$ help lsmap
Usage: lsmap {-vadapter ServerVirtualAdapter -plc PhysicalLocationCode
-all}
[-net] [-fmt delimiter]
Displays the mapping between physical and virtual devices.
-all Displays mapping for all the server virtual adapter
devices.
-vadapter Specifies the server virtual adapter device
by device name.
-plc Specifies the server virtual adapter device
by physical location code.
-net Specifies supplied device is a virtual server
Ethernet adapter.
-fmt Divides output by a user-specified delimiter.



A very important command is oem_setup_env, which gives you access to the regular AIX command-line interface. This is provided solely for the installation of OEM device drivers.


Virtual SCSI setup

To map an LV
# mkvg: creates the volume group, where a new LV will be created using the mklv command
# lsdev: shows the virtual SCSI server adapters that could be used for mapping with the LV
# mkvdev: maps the virtual SCSI server adapter to the LV
# lsmap -all: shows the mapping information

To map a physical disk
# lsdev: shows the virtual SCSI server adapters that could be used for mapping with a physical disk
# mkvdev: maps the virtual SCSI server adapter to a physical disk
# lsmap -all: shows the mapping information

Client partition commands

No commands needed, the Linux kernel is notified immediately

Create new volume group datavg with member disk hdisk1
# mkvg -vg datavg hdisk1

Create new logical volume vdisk0 in volume group
# mklv -lv vdisk0 datavg 10G

Maps the virtual SCSI server adapter to the logical volume
# mkvdev -vdev vdisk0 -vadapter vhost0

Display the mapping information
#lsmap -all
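
Correspondingly, a minimal sketch for mapping a whole physical disk instead of a logical volume; hdisk2 (an unused disk) and vhost1 (the target virtual SCSI server adapter) are assumptions:
# lsdev -type disk
# mkvdev -vdev hdisk2 -vadapter vhost1
# lsmap -vadapter vhost1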

Virtual Ethernet setup

To list all virtual and physical adapters use the lsdev -type adapter command.

$ lsdev -type adapter

name status description
ent0 Available 2-Port 10/100/1000 Base-TX PCI-X Adapter (14108902)
ent1 Available 2-Port 10/100/1000 Base-TX PCI-X Adapter (14108902)
ent2 Available Virtual I/O Ethernet Adapter (l-lan)
ide0 Available ATA/IDE Controller Device
sisscsia0 Available PCI-X Dual Channel Ultra320 SCSI Adapter
vhost0 Available Virtual SCSI Server Adapter
vhost1 Available Virtual SCSI Server Adapter
vhost2 Available Virtual SCSI Server Adapter
vhost3 Available Virtual SCSI Server Adapter
vsa0 Available LPAR Virtual Serial Adapter

Choose the virtual Ethernet adapter we want to map to the physical Ethernet adapter.

$ lsdev -virtual
name status description
ent2 Available Virtual I/O Ethernet Adapter (l-lan)
vhost0 Available Virtual SCSI Server Adapter
vhost1 Available Virtual SCSI Server Adapter
vhost2 Available Virtual SCSI Server Adapter
vhost3 Available Virtual SCSI Server Adapter
vsa0 Available LPAR Virtual Serial Adapter

The command mkvdev maps a physical adapter to a virtual adapter, creates a layer 2 network bridge and defines the default virtual adapter with its default VLAN ID. It creates a new Ethernet interface, e.g., ent3.
Make sure the physical and virtual interfaces are unconfigured (down or detached).

Scenario A (one VIO server)
Create a shared ethernet adapter ent3 with a physical one (ent0) and a virtual one (ent2) with PVID 1:

$ mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1
ent3 Available
en3
et3

This has created a new shared ethernet adapter ent3 (you can verify that with the lsdev command). Now configure the TCP/IP settings for this new shared ethernet adapter (ent3). Please note that you have to specify the interface (en3) and not the adapter (ent3).

$ mktcpip -hostname op710-1-vio -inetaddr 9.156.175.231 -interface en3 -netmask 255.255.255.0 -gateway 9.156.175.1 -nsrvaddr 9.64.163.21 -nsrvdomain ibm.com

Scenario B (two VIO servers)
Create a shared ethernet adapter ent3 with a physical one (ent0) and a virtual one (ent2) with PVID 1:

$ mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1


Configure the TCP/IP settings for the new shared ethernet adapter (ent3):

$mktcpip -hostname op710-1-vio -inetaddr 9.156.175.231 -interface en3 -netmask 255.255.255.0 -gateway 9.156.175.1 -nsrvaddr 9.64.163.21 -nsrvdomain ibm.com

Client partition commands
No new commands are needed; just the typical TCP/IP configuration is done on the virtual Ethernet interface that is defined in the client partition profile on the HMC.

Reference: http://santosh-aix.blogspot.com

Creating LPAR from command line from HMC



Create new LPAR using command line

mksyscfg -r lpar -m MACHINE -i name=LPARNAME,profile_name=normal,lpar_env=aixlinux,shared_proc_pool_util_auth=1,min_mem=512,desired_mem=2048,max_mem=4096,proc_mode=shared,min_proc_units=0.2,desired_proc_units=0.5,max_proc_units=2.0,min_procs=1,desired_procs=2,max_procs=2,sharing_mode=uncap,uncap_weight=128,boot_mode=norm,conn_monitoring=1

Note: Use the man mksyscfg command for information on all flags.

Another method is to create LPARs through a configuration file; this is useful when we need to create more than one LPAR at the same time.

Here is an example for 2 LPARs, each definition starting at new line:

name=LPAR1,profile_name=normal,lpar_env=aixlinux,all_resources=0,min_mem=1024,desired_mem=9216,max_mem=9216,proc_mode=shared,min_proc_units=0.3,desired_proc_units=1.0,max_proc_units=3.0,min_procs=1,desired_procs=3,max_procs=3,sharing_mode=uncap,uncap_weight=128,lpar_io_pool_ids=none,max_virtual_slots=10,"virtual_scsi_adapters=6/client/4/vio1a/11/1,7/client/9/vio2a/11/1","virtual_eth_adapters=4/0/3//0/1,5/0/4//0/1",boot_mode=norm,conn_monitoring=1,auto_start=0,power_ctrl_lpar_ids=none,work_group_id=none,shared_proc_pool_util_auth=1
name=LPAR2,profile_name=normal,lpar_env=aixlinux,all_resources=0,min_mem=1024,desired_mem=9216,max_mem=9216,proc_mode=shared,min_proc_units=0.3,desired_proc_units=1.0,max_proc_units=3.0,min_procs=1,desired_procs=3,max_procs=3,sharing_mode=uncap,uncap_weight=128,lpar_io_pool_ids=none,max_virtual_slots=10,"virtual_scsi_adapters=6/client/4/vio1a/12/1,7/client/9/vio2a/12/1","virtual_eth_adapters=4/0/3//0/1,5/0/4//0/1",boot_mode=norm,conn_monitoring=1,auto_start=0,power_ctrl_lpar_ids=none,work_group_id=none,shared_proc_pool_util_auth=1

Copy this file to HMC and run:

mksyscfg -r lpar -m SERVERNAME -f /tmp/profiles.txt

where profiles.txt contains all the LPAR information as shown above.

To change the settings of your LPAR, use the chsyscfg command as shown below.

Virtual SCSI creation and slot mapping
#chsyscfg -m Server-9117-MMA-SNXXXXX -r prof -i 'name=server_name,lpar_id=xx,"virtual_scsi_adapters=301/client/4/vio01_server/301/0,303/client/4/vio02/303/0,305/client/4/vio01_server/305/0,307/client/4/vio02_server/307/0"'

In the above command we are creating virtual SCSI adapters for the client LPAR and mapping slots to the VIO servers. In this scenario there are two VIO servers for redundancy.

Slot Mapping

vio01_server (VSCSI server slot)    Client (VSCSI client slot)
Slot 301                            Slot 301
Slot 303                            Slot 303

vio02_server (VSCSI server slot)    Client (VSCSI client slot)
Slot 305                            Slot 305
Slot 307                            Slot 307

These slots are mapped in such a way that any disk or logical volume mapped to the virtual SCSI server adapter through the VIO command "mkvdev" becomes visible to the client through the corresponding client slot.

Syntax for Virtual scsi adapter


virtual-slot-number/client-or-server/remote-lpar-ID/remote-lpar-name/remote-slot-number/is-required

As in the chsyscfg command shown above:
"virtual_scsi_adapters=301/client/4/vio01_server/301/0"

This means:

301 - virtual-slot-number
client - the adapter is a client adapter (it sits in the AIX client LPAR)
4 - remote-lpar-ID (the partition ID of the vio01_server partition)
vio01_server - remote-lpar-name
301 - remote-slot-number (the virtual SCSI server slot on the VIO server)
0 - is-required: 0 means desired (the slot can be removed by DLPAR operations); 1 would mean required (the slot cannot be removed by DLPAR operations)


To add virtual Ethernet adapters and slot mappings to the profile created above:

#chsyscfg -m Server-9117-MMA-SNxxxxx -r prof -i 'name=server_name,lpar_id=xx,"virtual_eth_adapters=596/1/596//0/1,506/1/506//0/1"'

Syntax for Virtual ethernet adapter


slot_number/is_ieee/port_vlan_id/"additional_vlan_id,additional_vlan_id"/is_trunk(number=priority)/is_required

This means that an adapter with the setting 596/1/596//0/1 is in slot number 596, it is IEEE 802.1Q enabled, its port VLAN ID is 596, it has no additional VLAN IDs assigned, it is not a trunk adapter, and it is required.

Reference: http://santosh-aix.blogspot.com

Listing LPAR information from HMC command line interface



To list the managed systems (CECs) managed by the HMC:

# lssyscfg -r sys -F name

To list the LPARs defined on a managed system (CEC), with their IDs and states:

# lssyscfg -m SYSTEM(CEC) -r lpar -F name,lpar_id,state

To list the profile of an LPAR created on your system, use the lssyscfg command as shown below.

# lssyscfg -r prof -m SYSTEM(CEC) --filter "lpar_ids=X,profile_names=normal"

Flags

-m -> managed system name
lpar_ids -> LPAR ID (the numeric ID of each LPAR created in the managed system (CEC))
profile_names -> the LPAR profile to select

To start a console for an LPAR from the HMC:

# mkvterm -m SYSTEM(CEC) --id X

-m -> managed system (for example, p5-570_xyz)
--id -> LPAR ID

To finish a vterm session, simply press ~ followed by a period (.).

To disconnect the console of an LPAR from the HMC:

# rmvterm -m SYSTEM(CEC) --id x

To access LPAR consoles for the different managed systems from the HMC:

#vtmenu


Activating Partition

hscroot@hmc-570:~> lssyscfg -m Server-9110-510-SN100xxx -r lpar -F name,lpar_id,state,default_profile
VIOS1.3-FP8.0,1,Running,default
linux_test,2,Not Activated,client_default
hscroot@hmc-570:~> chsysstate -m Server-9110-510-SN100xxxx -r lpar -o on -b norm --id 2 -f client_default

The above example would boot the partition in normal mode. To boot it into SMS menu use -b sms and to boot it to the OpenFirmware prompt use -b of.

To restart a partition, the chsysstate command would look like this:

hscroot@hmc-570:~> chsysstate -m Server-9110-510-SN100xxxx -r lpar --id 2 -o shutdown --immed --restart

And to turn it off - if anything else fails - use this:

hscroot@hmc-570:~> chsysstate -m Server-9110-510-SN100xxxx -r lpar --id 2 -o shutdown --immed
hscroot@hmc-570:~> lssyscfg -m Server-9110-510-SN100xxxx -r lpar -F name,lpar_id,state
VIOS1.3-FP8.0,1,Running
linux_test,2,Shutting Down

Deleting Partition

hscroot@hmc-570:~> lssyscfg -m Server-9110-510-SN100xxxx -r lpar -F name,lpar_id
VIOS1.3-FP8.0,1
linux_test,2
hscroot@hmc-570:~> rmsyscfg -m Server-9110-510-SN100xxxx -r lpar --id 2
hscroot@hmc-570:~> lssyscfg -m Server-9110-510-SN100xxxx -r lpar -F name,lpar_id
VIOS1.3-FP8.0,1

 