Self-Inflicted Error - Now Volume Won't Mount
SoftRAID 6.05, driver 6.05, 2018 Intel Mac mini, macOS 11.5.2, ThunderBay 4, RAID 5.
One volume in the array was being used for Time Machine when the ThunderBay was inadvertently powered down. "Oct 02 14:47:37 - SoftRAID Driver: A disk (disk2, SoftRAID ID: 08A82B777A986D00) for the SoftRAID volume "RAID Time Machine MM" (disk10) was removed or stopped responding while the volume was mounted and in use."
Figured the Time Machine backup was toast, and I'd just restart it. Time Machine refused. Two other volumes in the array appear to be OK. Since the loss wasn't terrible if the Time Machine volume was damaged (I have daily image copies of the Macintosh HD as well), I figured I'd just erase it using SoftRAID and start over. Had to disable Safeguard in order to erase. Time Machine continued to refuse to use the volume.
The volume was unmounted, refused to mount, and showed as "missing" in Finder. I attempted a validation, but it did not complete and the volume remained unmounted.
At some point, after several reboots, the volume appeared in Finder, but remained unmounted in SoftRAID. Gave up, <laugh> exited SoftRAID, and erased the volume using Apple's Disk Utility. It still refused to mount in SoftRAID. Ran validation: "There were 0 blocks which were updated. All parity data is now correct." Still refuses to mount.
Currently the volume is visible in Finder, but it will not mount in SoftRAID and Time Machine will not use it. The data is thoroughly gone (I think).
Notwithstanding the question of what sort of self-inflicted error I had incurred, I think I'm down the rabbit-hole on this one. Is it OK to (try to) delete the volume and create a new empty volume or is there something else more appropriate to do? Anything to get a volume that Time Machine will use again.
I think you just ran into an undocumented Time Machine issue. On Big Sur, new volumes must be APFS. You probably created your volume under Catalina or earlier; for some reason Time Machine can use older existing HFS Time Machine volumes, but any new volume must be APFS. So you need to create an APFS SoftRAID volume, and then you can use it.
You need the beta of SoftRAID to create APFS volumes, however, until 6.1 is released.
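If you want to confirm this from Terminal, the current Time Machine destination and a volume's filesystem can both be checked with built-in tools. A minimal sketch; the volume name below is taken from the error message earlier in this thread, so substitute your own:

```shell
# List the configured Time Machine destination(s).
tmutil destinationinfo

# Inspect the filesystem of a specific volume (name is an example).
# On Big Sur, a new Time Machine destination should report APFS here.
diskutil info "RAID Time Machine MM" | grep -E "File System|Type \(Bundle\)"
```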
Thank you, thank you, thank you!
I was able to format the volume to APFS using the beta, but still couldn't get the volume to mount. Got distracted (by life...) for a couple of days and just received notice that 6.1 was available. Installed 6.1 and, strangely, SoftRAID showed the volume as FAT (case sensitive). But Finder still said it was APFS, so I pressed on. SoftRAID mounted the volume successfully and it is now shown as APFS. <hooray> Kicked off Time Machine and the backup is currently running normally.
I suppose I should clean up the two other HFS+ volumes in the array and reformat them to APFS before they cause problems. Thanks again.
I'm back... The RAID5 array had three HFS+ volumes. One for Mac mini Time Machine, one for MBP TM (not currently used), one for work space when moving the bulk of my work from the MBP to the mini.
Using the beta, I previously erased the TM volume used by the mini and initialized it as APFS. After SoftRAID 6.1, I was able to mount and use the volume for TM.
So... I wanted to erase the other two volumes and initialize them as APFS also, but SoftRAID 6.1 does not offer APFS in the Erase function, and Apple's Disk Utility always reports them in use:
Creating a new empty APFS Container
The volume “RAID Time Machine MBP” on disk9 couldn’t be unmounted because it is in use by process 71 (fseventsd)
Couldn’t unmount disk. : (-69888)
In Safe Mode, the RAID 5 array and its volumes are not shown in Disk Utility. Back in normal operation (un-safe mode?) the volumes are back and Disk Utility still complains that they are in use. I can bring up SoftRAID and unmount them, but then they "disappear" from Disk Utility.
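One thing worth trying from Terminal, offered only as a sketch: error -69888 usually means some process (here fseventsd, per the message above) still holds the mount, and `diskutil` can force the unmount before erasing. `disk9` is the identifier quoted in the error; check `diskutil list` and substitute the right device and volume name for each volume:

```shell
# Find the device node for each SoftRAID volume.
diskutil list

# Force-unmount even if fseventsd (or another process) has files open.
sudo diskutil unmountDisk force disk9

# With the device unmounted, erase the whole virtual disk as APFS
# (volume name is illustrative).
diskutil eraseDisk APFS "RAID Time Machine MBP" disk9
```

Note that SoftRAID may prefer that its own Erase function be used on its volumes, so this is a fallback, not necessarily the supported path.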
- - -
1. Is it possible to have more than one APFS container in a single, RAID 5 array?
2. Is there a way to erase and reinitialize the other two volumes as APFS?
3. Or do I need to chuck it all, delete everything, reinitialize the RAID 5 array and define new volumes?