05-30-2012 05:24 AM
I've been experimenting too, and session copy was a big problem. With a File Library using software compression it took 3-4 hours; with StoreOnce it took 8-11 hours.
I had put that down to not using compression, but maybe there's more to it.
05-30-2012 11:13 AM - edited 05-30-2012 11:17 AM
Rehydrating (reading) from StoreOnce is expected to be slower than reading from a classic (non-DFMF) File Library. The latter produces just large files as media containers; if your write parallelism isn't too bad, they will be allocated mostly contiguously and thus read out at near-sequential throughput. Reading from StoreOnce is more of a random-access pattern across a lot of small- to medium-sized files. That said, I would not expect throughput to be as bad as you see it unless the RAID backend's IOPS is really borderline:
- How many spindles, at what RPM, are in the RAID5?
- Is the battery backup in good condition?
- What read throughput do you achieve on a small-file-dominated file system on that RAID?
- What OS and file system are you using for the store?
- When you copy to a Null Device local to the gateway's Media Agent, is throughput still as bad?
- Is the RAID5 I/O-saturated while the data is being read?
- And last but certainly not least, is there enough available RAM in the box? StoreOnce really needs buffer cache to perform.
06-06-2012 12:19 PM
Just chiming in for future reference.
I have been testing with a D2D4324 and am seeing the same issue/results. Rehydration for either a restore or a copy job is very, very slow: 800 GB is expected to take 10 hours.
Scheduling a POC with another vendor; we'll see if the results differ.
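For context, the figures quoted above work out to an effective throughput of only about 23 MiB/s:

```python
# 800 GB rehydrated in ~10 hours -> effective copy throughput
size_mib = 800 * 1024          # 800 GB expressed in MiB (binary units)
seconds = 10 * 3600            # 10 hours
mib_per_s = size_mib / seconds
print(f"{mib_per_s:.1f} MiB/s")  # prints 22.8 MiB/s
```

That is well under what even a small RAID set delivers on sequential reads, which fits the random-access rehydration pattern described earlier in the thread.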
06-16-2012 07:11 PM - edited 06-16-2012 07:12 PM
There is a defect in the RMA that causes this issue: the object copy itself is actually quite quick, but it passes the data multiple times to the destination device, so a copy from a StoreOnce device to a different device type also takes more space than the original data. A fix for DP 6.21 and 7.00 is available but is not included in the latest GA patches. Works like a charm for me.