01-28-2014 03:45 AM
We have a DL360 G7 that was unfortunately configured with small 7,200 RPM drives. In order to continue using it we need to upgrade the drives in both rotation speed and size. The four current drives are organized into two RAID1 arrays.
I believe we have three upgrade choices for this, but I wanted to confirm that to be the case.
Option one would be to create an artificial drive failure by shutting down, pulling a drive, and bringing the system back up. After it recognizes the failure we could hot plug the new larger/faster drive and let the RAID rebuild. Once the rebuild process completes we repeat with the remaining drives. Afterwards we could expand the arrays to use the new capacity and we'd be done.
Option two is to make a backup image of the system, install and configure the new drives using the ROM utility, then restore the image onto the new drives.
Option three is to create a real drive failure by simply pulling one of the existing drives while the system is running, then installing the new one in its place. After the rebuild, repeat the process for the remaining drives.
I am curious about the relative merits of each method. I am leaning towards option two because it seems the safest choice, but I wanted to get feedback on several details.
First question: in all but option two we would temporarily have a RAID1 combining drives of different sizes/rotation speeds/latency specs. Is such an array reliable and stable enough to run for a week (this is all off-hours work)?
Second question: how much risk is added by using the much faster option three rather than option one?
Obviously option 2 would generate a longer interruption than the others (due to the need to image out and back in the full contents of the system). It would however only mean one interruption rather than two. The main differences between one and three are downtime and risk from hot plugging drives.
Last question: since all of the equipment is hot-plug capable, how much risk is there to the old drives from yanking them, and to the new ones from plugging them in (we intend to reuse the old drives)?
Thanks in advance for any help and guidance..
01-28-2014 04:35 AM
I have done option 1 many times over the years.
It is fast, you don't have to do many steps, and you can revert by reinserting the removed drive and pulling the new one.
I mean, you will always have a way to go back.
Option 2 is also possible, but it needs longer offline time and additional tools/software.
Option 3 is possible, but the pulled drive will not have a clean operating system on it; for example, databases on that drive will be "dirty". You cannot be sure that drive is usable if you want to go back. It is as if you had yanked out a USB drive:
pulling it will not damage the drive itself, but the data on it may end up corrupt.
But before you start, have a backup or image anyway.
It is a good idea to update the Smart Array controller to the latest firmware before you start.
Check whether your OS (you don't say which one) can extend the boot drive, if you want to extend it.
And yes, it is no problem to reuse the old drives.
Simply go to the array BIOS on the new server and delete and recreate the RAID.
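If you prefer the command line over the Array BIOS, the same delete/recreate can be done with HP's hpacucli tool. A rough sketch only: it assumes the controller is in slot 0 and uses example bay names (1I:1:1, 1I:1:2), so check what "show config" actually reports on your server before running anything destructive.

```shell
# List arrays, logical drives, and physical drive bay names first.
hpacucli ctrl slot=0 show config

# Delete the stale logical drive left over from the previous server
# (destroys the data on it -- be sure you picked the right one).
hpacucli ctrl slot=0 ld 1 delete

# Recreate a RAID1 across the two reused drives (example bay names).
hpacucli ctrl slot=0 create type=ld drives=1I:1:1,1I:1:2 raid=1
```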
01-28-2014 09:07 AM
Thanks for your response. I have actually done option 1 many times using the exact same drive model/rotation speed. My only concern here is whether mixing the existing 7,200 RPM drives with new 10K RPM drives will lead to instability or an extremely long/slow rebuild.
The system is on a constant-replication backup system so it is always protected, but the recovery time is not the best. My question is partly about avoiding downtime, but I was also worried about creating a very noticeable slowdown twice with the rebuilds. Since the 7,200 RPM 500 GB drives would still be present until the last stage, I assume they would stretch the rebuild time, making the already slow performance worse for quite a while...
Any idea how long a RAID1 rebuild at 7,200 RPM would require?
(I'm wondering if it isn't better to plan the longer interruption and use option 2 to avoid the rebuilds.)
What do you think ?
01-28-2014 10:36 AM
Mixing drive speeds works; I have several mixed 10K/15K drive configurations.
You sometimes get 15K drives as spares when 10K drives are no longer available.
Before starting the rebuild you can set the "Rebuild Priority" to low.
If I remember correctly, it is not changeable once the rebuild is already running.
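For what it's worth, the rebuild priority can also be checked and set from hpacucli before you pull the first drive. A sketch, assuming the controller sits in slot 0 (verify with "hpacucli ctrl all show"):

```shell
# Show the controller settings, including the current rebuild priority.
hpacucli ctrl slot=0 show

# Lower the rebuild priority so production I/O suffers less during
# the rebuild (at the cost of a longer rebuild).
hpacucli ctrl slot=0 modify rebuildpriority=low
```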
As HP says: it depends on
- the controller type
- the size of the write cache
Maybe 100 GB/h? I don't know.
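Taking that 100 GB/h figure purely as an assumption, a back-of-the-envelope estimate for one 500 GB mirror member:

```shell
# Rough estimate only: assumes the ~100 GB/h rebuild rate guessed above.
DRIVE_GB=500
RATE_GB_PER_H=100
echo "Estimated rebuild time: $((DRIVE_GB / RATE_GB_PER_H)) hours per drive"
```

The real-world report later in this thread of roughly ninety minutes per 300 GB drive works out to about 200 GB/h, so this is the right order of magnitude but likely pessimistic.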
08-15-2014 08:54 AM - edited 08-15-2014 08:56 AM
I am in a similar situation using option 1. How did you expand the array to use the capacity of the larger disks?
08-15-2014 11:13 PM
I update arrays all the time with larger disks using the "replace one at a time" option.
If the drives are hot-pluggable there's no reason to do it with the system turned off... that would mean shutting down the server each time you swap a disk, so it's not very "online upgradeable". If it's a system without hot-plug drives, I guess that's the only choice.
I just got done upgrading a DL360 G7 with 8 x 144GB drives to 8 x 300GB drives... it took a while. Replace a drive, let it rebuild. Replace the next, let it rebuild, etc. It took maybe an hour and a half to do each drive so I just spaced it out over a couple days rather than babysit it for that long all at once.
Once it's done, use the online expansion feature of the RAID controller. You do need a battery backed cache to do online expansions... I don't think it works with the zero-memory configurations.
I'm pretty sure I've been able to do this on all of the HP array controllers going back to at least the 5i/6i and for sure on the P4xx/P8xx line of controllers. It might have even worked on the SA 4xxx models or whatever going WAY back, but I don't remember ever doing it on those.
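Both the cache check and the expansion can be done from hpacucli on these controllers. A sketch, assuming slot 0 and logical drive 1; the exact cache wording in the output varies by controller model, so adapt as needed:

```shell
# The detailed view lists the cache status, including whether a
# battery-backed (BBWC) or flash-backed (FBWC) write cache is present.
hpacucli ctrl slot=0 show detail

# After every drive in the array has been replaced and finished
# rebuilding, grow the logical drive to use the new capacity.
hpacucli ctrl slot=0 ld 1 modify size=max
```

Depending on the OS, you may still need to extend the partition and filesystem afterwards; the controller only grows the logical drive.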
08-16-2014 04:18 AM
Thank you for your reply.
We have P4xx controllers in G5, G6, and G7 servers. Can you please advise:
- How can I check if these controllers have a battery backed cache?
- How did you start the online expansion feature?
Thank you in advance.