Subject: 4 disks used as RAID-5 vs. 8 disks used as RAID-5
So I've tested 4 disks in two of my LaCie 2big 4TB units on a single Thunderbolt 1 bus, using SoftRAID to configure them as RAID-5. Using the AJA System Test to measure I/O rates I got...
Writes: 314 MB/sec
Reads: 422 MB/sec
If I were to repeat this test using 8 disks in four LaCie 2big 4TB units on a single Thunderbolt 1 bus with SoftRAID, should I expect to see close to 2x the above numbers for writes/reads?
Those results are slow, unless this is a used/partially filled volume. You should be closer to 500MB/s.
(assuming 16GB File size, 4K frame rate, and unchecking the disable file system cache box)
If you created an 8-drive RAID-5, yes, you would get about double the performance.
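That "about double" expectation can be put in a quick back-of-envelope sketch (an editorial illustration, not a measurement): assume streaming throughput scales linearly with spindle count until the shared Thunderbolt 1 bus saturates. The ~1000 MB/s usable bus figure is an assumption, not a spec quoted in this thread.

```python
# Back-of-envelope scaling model: RAID-5 streaming throughput assumed to
# grow linearly with spindle count until the shared bus saturates.
# The usable-bus figure below is an assumption, not a measured spec.
TB1_USABLE_MBS = 1000.0  # assumed usable throughput of one Thunderbolt 1 bus

def scaled(rate_4disk_mbs, n_disks, bus_cap=TB1_USABLE_MBS):
    """Scale a measured 4-disk rate to n_disks, capped by the bus."""
    return min(rate_4disk_mbs * n_disks / 4.0, bus_cap)

print(scaled(314, 8))  # writes: 628.0 MB/s, still under the assumed cap
print(scaled(422, 8))  # reads: 844.0 MB/s
```

On these assumptions, 8 disks stay below even a single Thunderbolt 1 bus for writes, so the doubling is plausible; the read figure starts to approach the bus ceiling.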
Yes, AJA's cache was disabled so I could obtain direct I/O rates without buffering in the kernel buffer cache.... that's cheating IMO.
I was using 4GB file sizes with AJA.
Yes.... the 4-disk RAID-5 was about 70% full.
Thanks for your reply.
As an aside I compared my Promise Pegasus2 R6 in RAID-5 using AJA the same way as above and it gave
Writes: 479 MB/sec
Reads: 536 MB/sec
Toshiba 2TB disks in the Pegasus and it was around 35% full.
That is reasonable. Volumes get significantly slower as they fill up. 70% full explains your prior result; once you pass 80% full, throughput starts to drop dramatically.
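The fill-level effect can be sketched with a toy model (an editorial illustration): spinning disks fill the fast outer tracks first, and sequential throughput falls roughly linearly toward the inner tracks. The 200/100 MB/s endpoints below are illustrative assumptions, not specs for any drive in this thread.

```python
# Toy model of why fuller volumes benchmark slower: outer (fast) tracks
# are filled first, and sequential rate falls roughly linearly toward the
# inner tracks. Endpoint rates are illustrative assumptions only.
OUTER_MBS = 200.0  # assumed per-disk rate at the outer edge
INNER_MBS = 100.0  # assumed per-disk rate at the inner edge

def per_disk_rate(fill_fraction):
    """Approximate per-disk sequential rate at the current fill level."""
    return OUTER_MBS - (OUTER_MBS - INNER_MBS) * fill_fraction

for fill in (0.35, 0.70, 0.80):
    print(f"{fill:.0%} full -> ~{per_disk_rate(fill):.0f} MB/s per disk")
```

In this model a 70%-full volume writes new data at a substantially lower rate than a 35%-full one, which is consistent with the two benchmark runs above.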
OK... So I'm going to setup this RAID-5 on 4 individual LaCie 2big 4TB enclosures. Thus the RAID-5 would be spread over 8 disks. The LaCie 2big units are to be connected to a MP6,1 that has 6 Thunderbolt-2 ports with pairs on 3 Thunderbolt buses. That is, each pair of Thunderbolt 2 ports share a single bus.
My question is, will optimum performance be achieved if I connect 3 of the 2bigs to ports on separate Thunderbolt buses, with the 4th connected to any of the three remaining ports?
To conserve Thunderbolt ports I could simply daisy-chain all 4 of the 2bigs off a single Thunderbolt port, but I wonder if this configuration would give me less performance than the one above?
Please advise. Thank you.
Each TB bus can do up to 1.3 to 1.4GB/s. So two buses would probably handle the throughput of these drives.
I submitted a separate posting about a problem I've encountered with having 4 RAID-5 Volumes (3.5TB each in size, named BIG-1, BIG-2, BIG-3 and BIG-4) spread across the 8 disks in the 4 LaCie 2Big 4TB units. Note that each LaCie 2Big has two disks of 2TB capacity.
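A quick capacity check of the layout just described (editorial sketch; slice sizes inferred from the description above): each 2TB disk carries one 500GB slice per volume, and RAID-5 gives up one slice's worth of capacity to parity. Because every volume has a slice on every disk, one failed disk degrades all four volumes at once.

```python
# Capacity check: 8 disks of 2 TB, each carved into four 500 GB slices,
# each RAID-5 volume built from one slice per disk. One slice's worth of
# capacity goes to parity; every volume touches every disk.
def raid5_volume_tb(n_disks, slice_tb):
    """Usable size of a RAID-5 volume built from one slice per disk."""
    return (n_disks - 1) * slice_tb

print(raid5_volume_tb(8, 0.5))  # 3.5 (TB), matching BIG-1 through BIG-4
```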
BIG-1, BIG-2 and BIG-4 are operational, but BIG-3 is degraded because one of the 8 disks is reporting errors. I have BIG-3 unmounted to stop SoftRAID 5.5 from issuing errors about it.
Can I simply pull the failing disk, replace it with a new 2TB disk that is formatted/partitioned into four 500GB chunks, and have BIG-3 rebuilt?
I assume I will also need to rebuild BIG-1, BIG-2 and BIG-4, right ?
Thank you for your attention and advice about this problem.
As an aside I do have full backups of BIG-1/2/3/4 that get created each day in early morning hours. So if the worst scenario arises where I simply replace the failed disk and things don't rebuild successfully, I can remake/initialize BIG-1/2/3/4 as fresh new RAID-5 Volumes and restore their data from their backups.... but obviously I'd like to avoid having to do that.
You are on the right track, see the other response.
What you do is initialize the new disk with SoftRAID and "add disk" to each volume.
Do not try to create volumes on D2; just leave it in the freshly initialized state from SoftRAID and "add disk" to each volume.
OK, thank you.
I'll post my results for this recovery. It will be Thursday at earliest as I'm busy today with other things.
Using SoftRAID 5.5.5
OK.... So before attempting the recovery of this failed disk for BIG-1/2/3/4 RAID-5 Volumes I decided to test the recovery procedure using my OWC ThunderBay 4 mini (TB4m) that had 4x 240GB OWC SSDs inside; named D1, D2, D3 and D4.
1) I created four RAID-5 Volumes named TB-1, TB-2, TB-3 and TB-4 using D1/2/3/4. Their sizes were 180GB, 180GB, 179GB and 179GB respectively.
2) I now slid the D2/3/4 SSDs out from the TB4m enclosure, leaving D1 in place.
3) I now Disabled Safeguard on the TB-1/2/3/4 Volumes and selected Initialize on the D1 SSD.
4) I now slid back the D2/3/4 SSDs into the TB4m enclosure.
5) SoftRAID reported "degraded/missing disk" for the TB-1/2/3/4 RAID-5 Volumes (as expected)
6) I now selected each TB-1/2/3/4 in turn to "Add Disk..." from the initialized D1 SSD.
7) All went well until I got to the "Add Disk..." for TB-4. SoftRAID issued an error stating "Insufficient space...".
8) Hmmmm, this result was puzzling and disturbing.
9) So I now waited for the auto rebuilds to complete on TB-1, TB-2 and TB-3.
10) Now I tried again to "Add Disk..." for TB-4 and this time it succeeded.
Thus, it seems the procedure for recovering my BIG-1/2/3/4 RAID-5 Volumes must be done as described above, deferring "Add Disk..." for BIG-4 until BIG-1/2/3 have been rebuilt.
Is this a bug (see item 7 above) or is this expected but not documented in the SoftRAID documentation for recovering a failed disk in a RAID-5 configuration ?
Was my pre-test for recovery using my TB4m accurately carried out in SoftRAID's eyes ?
Thank you.
I think you did this correctly. I will test your scenario in our labs to make sure we do not have a bug in this "add disk" part of the application. Thanks for describing this well enough that I can test it.
OK... So today I'm attempting the recovery. Here's what I've done so far. Note, I'm using SoftRAID 5.5 at this time on my MP6,1 running El Capitan 10.11.6.
1) I replaced disk D2 and Initialized it with SoftRAID
2) I now performed "Add Disk..." for BIG-1 and selected to add from the new disk D2
3) The BIG-1 now started its rebuild
RAID 5 - degraded - safeguard enabled
no errors - rebuilding
current offset: 1,752,307,897,592 - time remaining: 00:47:55
4) The current offset is slowly incrementing, and the time remaining decrements to zero, then becomes non-zero and decrements again. The time cycles up and down continuously while the vertical blue progress bar on the disk icon with the "5" in the middle slowly rises. It's at the 50% mark at this time.
Question: Will this rebuild stop/complete when the current offset number reaches the BIG-1's total bytes of 3,499,999,664 ? This would match the 50% progress bar at this time.
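As a quick check of that reading (editorial sketch): if the current offset counts bytes scanned, then against the nominal ~3.5 TB volume size the rebuild sits right at the 50% the progress bar shows. The byte total below is that assumed nominal size, not a figure read from SoftRAID.

```python
# Progress estimate from the reported current offset, assuming the offset
# counts bytes of the nominal ~3.5 TB volume. The total below is the
# assumed nominal volume size, not a number taken from SoftRAID.
CURRENT_OFFSET = 1_752_307_897_592   # bytes, from the SoftRAID display
VOLUME_BYTES = 3_500_000_000_000     # assumed nominal 3.5 TB

progress = CURRENT_OFFSET / VOLUME_BYTES
print(f"rebuild ~{progress:.1%} complete")  # ~50%, matching the progress bar
```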
While BIG-1 is being rebuilt I find I cannot proceed with "Add Disk..." for BIG-2. It simply reports
Unable to add a disk to this volume
You cannot add a disk to the volume "BIG-2" because it contains
one or more disks which are out of sync. You should backup all
the data on this disk, delete the volume and recreate it.
This is kind of scary.
Will I be able to "Add Disk..." for BIG-2 once BIG-1 has been rebuilt? And of course the same for BIG-3 and BIG-4?
If you have two disks out of sync, then no; it means SoftRAID cannot guarantee what is on all disks. You can tell by how many yellow lines appear when you click on each volume.
Back up the volume which is out of sync (D2) or any other volume like that, so you can either delete/recreate, or erase and add the disk.
One thing I did not mention, as I was assuming all was OK (it's harder to do this without tech support files), is making sure the same disk is out of sync for each volume, in case you had events going on in your system.
Probably you are OK, though, but you may need to backup restore that volume.
The D1 disk is reporting
no error - data and parity disk - out-of-sync
Volumes BIG-2, BIG-3 and BIG-4 all show a single yellow line to the disk D1.
I have full current backups for Volumes BIG-1 and BIG-2. I'm also making full backups for Volumes BIG-3 and BIG-4. The BIG-3 backup needs some 400 GB of new data copied over and is almost complete. The backup for BIG-4 should go quickly.
If the "Add Disk..." continues to be a no-go I shall delete Volumes BIG-2/3/4, remake them and then restore them from their backups.
I still do not understand why I cannot "Add Disk..." for BIG-2/3/4.
I have created a Tech Report file. Who do I send it to ?

