
[Solved] Unable to create HFS+ and APFS volumes on same disks

(@krustyfur)
Active Member Customer

SoftRaid 6.2.1, MacOS 12.3, Thunderbay 4 4x8TB Seagate Ironwolf HDDs, OWC thunderbolt dock, 16" M1Max MBP

 

I am trying to set up a new disk array to replace a Mercury Elite Pro Quad with older, smaller drives. I want to configure the new drives as Raid 1+0, partitioned as:

6TB : HFS+ (Digital photography) 

2TB : HFS+ (VMs)

4TB : APFS (Time machine backups)

4 TB : HFS+ (long term archives)

 

Creating the first HFS+ volume works fine. It is created, mounted, and available in the Finder. Creating the second HFS+ volume works too, but with an anomaly: SoftRAID now reports the first volume as unmounted, even though it is still available in the Finder and I can create files on it. Creating the APFS volume fails, saying the disks are in use:

[Screenshot: Screen Shot 2022-04-06 at 10.05.32]

 

Attempting to create another HFS+ volume also fails with the same error. Disk Utility is not running, but I do have the OWC Dock Ejector running. How do I go about creating these volumes? 

 

Chris

 

Apr 06 1014 - SoftRAID Application: The volume create command for the volume "Archive" failed because this disk is being used by another application (error number = 103).
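One way to narrow down which process is holding the disks is `lsof`. This is only a sketch; the volume path and device nodes below are hypothetical placeholders, so substitute your own from `diskutil list`:

```shell
# List processes with open files anywhere under a mounted volume
# ("/Volumes/Photos" is a placeholder for one of your volumes):
sudo lsof +D "/Volumes/Photos"

# Or check for processes holding the raw device nodes that back
# the RAID set (disk4/disk5 are placeholders):
sudo lsof /dev/disk4 /dev/disk5
```

If either command prints a process, that is a likely candidate for the "disk is being used by another application" error.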

This topic was modified 2 months ago by SoftRAID Support
Topic starter Posted : 06/04/2022 9:30 am
(@softraid-support)
Member Admin

Attach a SoftRAID tech support file (Utilities menu) and I can take a look at what you have. Should be easy to sort this out.

Posted : 06/04/2022 11:11 am
(@krustyfur)
Active Member Customer

 

I couldn't find an easy way to quit or uninstall Dock Ejector, in case that is what is tying up the disks.

 

Thanks. 

Topic starter Posted : 06/04/2022 11:26 am
(@softraid-support)
Member Admin

@krustyfur 

It's not Dock Ejector; essentially, it is just a script that queries all devices on the OWC dock and sends an eject.

 

Can you manually unmount the prior two volumes?

Usually it is Time Machine (APFS) volumes causing the headache, as Time Machine does not want to release its volumes; HFS+ volumes are rarely an issue.

If you cannot manually unmount the volumes for whatever reason, you can force-eject them using Terminal; I can send that command.
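For reference, macOS's `diskutil` provides force-unmount commands along these lines. The volume path and disk identifier below are hypothetical; check `diskutil list` for the real ones before running anything:

```shell
# Force-unmount a single stubborn volume by its mount point
# ("/Volumes/Archive" is a placeholder):
sudo diskutil unmount force /Volumes/Archive

# Or force-unmount every volume on a whole disk at once
# (disk4 is a placeholder identifier):
sudo diskutil unmountDisk force /dev/disk4
```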

Posted : 06/04/2022 12:41 pm
(@krustyfur)
Active Member Customer

@softraid-support I ejected all volumes across both enclosures (using Finder), then was able to create the new volumes. When I created the APFS volume, it re-mounted the HFS+ volumes on those disks first, then created the APFS volume successfully. I then ejected everything again and created the last HFS+ volume. Again, it re-mounted everything first, then created the volume successfully. Display bug: it shows all the other volumes as unmounted, even though they are mounted and accessible. Restarting SoftRAID resolves this.

 

[Screenshot: Screen Shot 2022-04-06 at 13.58.33]

 

 

I saw in the support FAQs that SoftRAID will only create one APFS container per RAID set. How do I go about converting that 4TB APFS volume into 2x 2TB volumes within the container that was created? Do I go into diskutil, delete the TM volume, then create 2 new APFS volumes within the APFS container it shows?
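The diskutil route described above would presumably look something like the following. This is only a sketch; the container and volume identifiers are hypothetical placeholders, so verify the real ones with `diskutil apfs list` first:

```shell
# Delete the existing Time Machine volume inside the container
# (disk5s1 is a placeholder for the volume's device identifier):
diskutil apfs deleteVolume disk5s1

# Add two new APFS volumes that share the container's free space
# (disk5 is a placeholder for the container reference):
diskutil apfs addVolume disk5 APFS "TimeMachine1"
diskutil apfs addVolume disk5 APFS "TimeMachine2"
```

APFS volumes in one container share space dynamically, so the two new volumes would not each be fixed at 2TB unless a size or reserve is specified.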

[Screenshot: Screen Shot 2022-04-06 at 13.53.57]

 

Also, why do the Thunderbay drives appear as the raw drive models, rather than with enclosure identifiers like the Mercury Elite does?

 

Topic starter Posted : 06/04/2022 1:00 pm
(@softraid-support)
Member Admin

@krustyfur 

You want "soft" volumes? We do not have the ability to support this yet; we need more APFS documentation to accomplish it.

The mount status is a minor bug; we will fix it when we have better APFS support added to SoftRAID.

 

The Mercury Elite's A/B/C/D slots have hardware identifiers, so they show in Disk Utility. Thunderbays identify all disks identically, regardless of slot.

Posted : 06/04/2022 2:05 pm
(@krustyfur)
Active Member Customer

@softraid-support Thank you for your help. While I am able to create the APFS volumes now, I can't use them because I am getting kernel panics writing to them (I see this is already a known issue). Copying 300+GB to one of the HFS+ volumes worked though. I'll just have to keep using the Mercury Elite for TM backups for now.

Topic starter Posted : 06/04/2022 3:52 pm
(@softraid-support)
Member Admin

@krustyfur 

Recreate the volume with a 64K stripe unit size and see if that works. It should.

Posted : 06/04/2022 5:05 pm
(@krustyfur)
Active Member Customer

@softraid-support I recreated the APFS volume as Raid 1+0, workstation optimized, 64KB stripe size (I am sure this was what it was before). I was able to copy a 270GB tarball over and then extract it without any issue.

Once I tried to use it for Time Machine (allowing it to reformat and encrypt), a kernel panic happened as soon as it tried to write to the drive, as before.

I have deleted it again, recreated it as Raid 5 across all 4 disks with a 64KB stripe (even though it recommended 16KB), and Time Machine is working, albeit horrendously slowly. It was considerably faster writing an initial backup to a single disk in the Mercury Elite.

 

At this point it seems that I need to go back to using a pair of separate disks in the mercury elite for Time Machine, and using the thunderbay for everything else.

 

Chris

Topic starter Posted : 07/04/2022 12:45 pm
(@softraid-support)
Member Admin

@krustyfur 

Time Machine seems to me to get worse with each iteration.

The 64K stripe unit size is needed for RAID 5 on M1 to avoid a kernel panic (it's the default for RAID 1+0), but it should perform similarly to 16K. Something else was likely going on, but I have also seen Time Machine take days to back up a modest volume (a single SSD, not a RAID volume).

 

Posted : 07/04/2022 2:49 pm