04-03-2011 01:45 AM
I am planning a migration between two storage systems:
EMC CLARiiON to IBM N series.
I am using HP-UX servers.
Right now I have one path to each storage machine.
The easiest way, I think, is to create a mirror between the volumes, then break the mirror and add the second N series path.
The problem is that I'm not familiar with HP-UX systems, and for now I only have the two volumes visible... Could you please help and point me to a PDF or the commands to do it?
04-04-2011 06:20 AM
That can be the easiest way. Be aware that NetApp and CX volume size calculations are different (NetApp uses binary units, 1 GB = 1024 MB; CX uses decimal units, 1 GB = 1000 MB). As long as your new LUNs are the same size or slightly larger than the originals, you shouldn't see any issues.
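To make the sizing point concrete, here is a quick back-of-the-envelope check (the 100 GB figure is just an example, not from this thread):

```shell
# "100 GB" expressed in MB under binary (1024-based) vs decimal (1000-based) units.
BINARY_MB=$((100 * 1024))    # 102400 MB -- how a 1024-based array counts it
DECIMAL_MB=$((100 * 1000))   # 100000 MB -- how a 1000-based array counts it
DIFF_MB=$((BINARY_MB - DECIMAL_MB))
echo "binary:  ${BINARY_MB} MB"
echo "decimal: ${DECIMAL_MB} MB"
echo "difference: ${DIFF_MB} MB"
```

So a "100 GB" LUN can differ by 2400 MB depending on which convention the array uses; compare the raw MB (or block) counts rather than the "GB" labels when sizing the target LUN.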
If you have a MirrorDisk/UX license, go for it. If not, you can also use pvmove. The only concern with pvmove is that if there were a host failure at the exact moment a write commits, you could have data loss.
Identify the new LUNs with ioscan -fnC disk. You can determine what a disk is part of (or not part of) with pvdisplay -v /dev/dsk/ctd | more.
pvcreate the new LUN (pvcreate /dev/rdsk/ctd).
vgextend /dev/vgXX /dev/dsk/ctd, where XX is the VG you want to extend and ctd is the path of the new disk.
lvextend -m 1 /dev/vgXX/lvolY /dev/dsk/ctd, where XX is your VG and Y is the name of your lvol to mirror, ctd is your NetApp disk.
You can monitor the mirroring with lvdisplay -v. Note that one column shows blocks while the other doesn't; as the mirroring completes, the other column fills up.
Once done, run lvreduce -m 0 /dev/vgXX/lvolY /dev/dsk/ctd, where ctd is the CX disk.
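Putting the steps above together, a sketch of the whole sequence (assuming a MirrorDisk/UX license; vg01, lvol1, and the ctd device paths below are placeholders, not from this thread -- substitute whatever ioscan reports on your host; the final vgreduce cleanup is an extra step beyond the mirror break):

```shell
# 1. Rescan and identify the new LUN.
ioscan -fnC disk
insf -e                                   # create device files if needed

# 2. Check what (if anything) the disk already belongs to.
pvdisplay -v /dev/dsk/c4t0d1 | more       # c4t0d1: placeholder NetApp disk

# 3. Initialize the NetApp LUN as an LVM physical volume.
pvcreate /dev/rdsk/c4t0d1

# 4. Add it to the volume group holding the data.
vgextend /dev/vg01 /dev/dsk/c4t0d1

# 5. Mirror the logical volume onto the NetApp disk.
lvextend -m 1 /dev/vg01/lvol1 /dev/dsk/c4t0d1

# 6. Watch the sync; stale extents go current as mirroring completes.
lvdisplay -v /dev/vg01/lvol1 | more

# 7. Once in sync, drop the mirror half on the CX disk.
lvreduce -m 0 /dev/vg01/lvol1 /dev/dsk/c2t0d0   # c2t0d0: placeholder CX disk

# 8. Remove the CX disk from the VG when nothing uses it anymore.
vgreduce /dev/vg01 /dev/dsk/c2t0d0
```

This keeps the data online throughout; only step 7 is the "break the mirror" moment.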
A couple of things to note. First, you should use more than one path to your arrays/LUNs, preferably on redundant fabrics.
Second, those paths should be on different HBAs, and you should be using single-initiator/single-target zoning on your switches. I mean two things here: first, you shouldn't share CX and NetApp LUNs on the same HBA; second, you should be using redundant HBAs into your multiple fabrics.
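As a sketch of what single-initiator/single-target zoning can look like on a Cisco MDS switch (the VSAN number, zone/zoneset names, and WWPNs below are made-up placeholders):

```
conf t
zone name hpux1_hba1_netapp_0a vsan 10     ! one initiator, one target
  member pwwn 10:00:00:00:00:00:00:01      ! placeholder: host HBA1 WWPN
  member pwwn 50:00:00:00:00:00:00:0a      ! placeholder: NetApp target port WWPN
zoneset name fabric_a vsan 10
  member hpux1_hba1_netapp_0a
zoneset activate name fabric_a vsan 10
```

One such zone per initiator/target pair keeps the CX and NetApp traffic cleanly separated.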
Third, watch your snap reserve as you are doing the expansions.
You will also be unable to assess any performance differences until the mirror is broken.
The biggest thing about this is that it will be time consuming, but it will be online.
04-04-2011 08:41 AM
Actually, on each server we have 3 HBAs:
2 dedicated to disks
1 to tape
All paths are on Cisco SAN switches.
I have planned to remove one path from the EVA storage and forward it to the N series on IBM 2498 switches, so it will be another SAN fabric.
It is impossible to have more than one path to each storage machine...