[Solved] Unable to create HFS+ and APFS volumes on same disks
SoftRAID 6.2.1, macOS 12.3, Thunderbay 4 4x8TB Seagate Ironwolf HDDs, OWC Thunderbolt dock, 16" M1 Max MBP
I am trying to set up a new disk array to replace a Mercury Elite Pro Quad with older, smaller drives. I want to configure the new drives as RAID 1+0, partitioned as:
6TB : HFS+ (Digital photography)
2TB : HFS+ (VMs)
4TB : APFS (Time machine backups)
4TB : HFS+ (long-term archives)
Creating the first HFS+ volume works fine: it is created, mounted, and available in the Finder. Creating the second HFS+ volume works too, but with an anomaly: SoftRAID now reports the first volume as unmounted, even though it is still available in the Finder and I can create files on it. Creating the APFS volume fails, saying the disks are in use:
Apr 06 1014 - SoftRAID Application: The volume create command for the volume "Archive" failed because this disk is being used by another application (error number = 103).
Attempting to create another HFS+ volume also fails with the same error. Disk Utility is not running, but I do have the OWC Dock Ejector running. How do I go about creating these volumes?
Attach a SoftRAID tech support file (Utilities menu) and I can take a look at what you have. This should be easy to sort out.
I couldn't find an easy way to quit Dock Ejector or to uninstall it, just in case that is what is tying up the disks.
It's not Dock Ejector; that is essentially just a script that queries all devices on the OWC dock and sends an eject command.
Can you manually unmount the prior two volumes?
Usually it is Time Machine (APFS) volumes that cause the headache, as Time Machine does not want to release its volumes; HFS+ volumes are rarely an issue.
If you cannot manually unmount the volumes for whatever reason, you can force-eject them from Terminal; I can send that command.
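For reference, the usual way to force an unmount from Terminal is `diskutil`. This is a sketch: the volume name "Archive" and the device identifier `disk4` are placeholders, so check your own names with `diskutil list` first, and be aware that force-unmounting discards any pending writes.

```shell
# List all disks and volumes to find the right names/identifiers
diskutil list

# Force-unmount a single volume by its mount point
# (assumes a volume mounted at /Volumes/Archive)
diskutil unmount force /Volumes/Archive

# Or force-unmount every volume on a whole disk by device identifier
# (disk4 is a placeholder; substitute the identifier from `diskutil list`)
diskutil unmountDisk force disk4
```

Force-unmounting will kick off any application still holding files open on the volume, which is usually exactly what you want when something invisible is tying up the disks.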
@softraid-support I ejected all volumes across both enclosures (using Finder), then was able to create the new volumes. When I created the APFS volume, it re-mounted the HFS+ volumes on those disks first, then created the APFS volume successfully. I then ejected everything again and created the last HFS+ volume. Again, it re-mounted everything first, then created the volume successfully. Display bug: it shows all the other volumes as unmounted, even though they are mounted and accessible. Restarting SoftRAID resolves this.
I saw in the support FAQs that SoftRAID will only create one APFS container per RAID set. How do I go about converting that 4TB APFS volume into 2x 2TB volumes within the container that was created? Do I go into diskutil, delete the TM volume, then create 2 new APFS volumes within the APFS container it shows?
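In case it helps anyone else, the steps I had in mind would look roughly like this with `diskutil apfs`. The container identifier `disk5`, the volume identifier `disk5s1`, and the volume names are all placeholders, so confirm your own identifiers with `diskutil apfs list` before deleting anything:

```shell
# Show all APFS containers and their volumes to find the identifiers
diskutil apfs list

# Delete the existing Time Machine volume from the container
# (disk5s1 is a placeholder for the old TM volume's identifier)
diskutil apfs deleteVolume disk5s1

# Add two new volumes to the container, each capped at 2 TB via a quota
# (disk5 is a placeholder for the container identifier)
diskutil apfs addVolume disk5 APFS TM1 -quota 2t
diskutil apfs addVolume disk5 APFS TM2 -quota 2t
```

APFS volumes share the container's free space by default, so the `-quota` flag is what actually enforces the 2TB split; without it, both volumes could grow to fill the whole container.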
Also, why do the Thunderbay drives appear as the raw drive models, rather than with enclosure identifiers like the Mercury Elite does?
You want "soft" volumes? We do not have the ability to support this yet; we need more APFS documentation to accomplish it.
The mount status is a minor bug, we will fix it when we have better APFS support added to SoftRAID.
The Mercury Elite ABCD slots have hardware identifiers, so they show in Disk Utility. Thunderbays identify all disks identically, regardless of slot.
@softraid-support Thank you for your help. While I am able to create the APFS volumes now, I can't use them because I am getting kernel panics writing to them (I see this is already a known issue). Copying 300+GB to one of the HFS+ volumes worked, though. I'll just have to keep using the Mercury Elite for TM backups for now.
@softraid-support I recreated the APFS volume as RAID 1+0, workstation optimized, 64KB stripe size (I am sure this matches the previous configuration). I was able to copy a 270GB tarball over and then extract it without any issue.
Once I tried to use it for Time Machine (allowing it to reformat and encrypt), a kernel panic happened as soon as it tried to write to the drive, as before.
I have deleted it again and recreated it as RAID 5 across all 4 disks, 64KB stripe (even though it recommended 16KB), and Time Machine is working, albeit horrendously slowly. It was considerably faster writing an initial backup to a single disk in the Mercury Elite.
At this point it seems that I need to go back to using a pair of separate disks in the Mercury Elite for Time Machine, and use the Thunderbay for everything else.
Time Machine seems to me to get worse with each iteration.
The 64KB stripe unit size is needed for RAID 5 on M1 to avoid a kernel panic (it's the default for RAID 1+0), but it should perform similarly to 16KB. Something else was likely going on, but I have also seen Time Machine take days to back up a modest volume (a single SSD, not a RAID volume).
@softraid-support Just to follow up on this with 6.3 installed (and Monterey 12.4). RAID 1+0 is still a no-go with Time Machine (immediate kernel panic on write), but RAID 5 with 16KB stripe size now works, and is considerably faster than with a 64KB stripe size.
I created 2 RAID 5 volumes in the remaining space, one with 16KB stripe size and one with 64KB stripe size. The first one took about 40 minutes to do a 400GB+ initial backup, while the second estimated 2+ hours. I recreated the second volume with 16KB stripe size, and the estimate went down to about an hour. This is probably because this volume is the last one, in the slowest area of the disks.
I'll keep these volumes running in conjunction with the other two in the Mercury Elite and see if any issues arise.