@softraid-support I've already done that once, but I used the same volume name, so I've just done it again, this time with a different volume name, "test". I'm still unable to change permissions on that volume, or even on the older HFS+ volume (originally formatted with SR 5.x). See the console transcript below; the "chown" command has no effect on ownership of the new volume (or on any test file on the older HFS+ volume).
To summarize: b18 has still not fixed my permissions problems; same model/OS machine with b8 works fine.
root@carlisle ➜ /Library
$ l /Volumes
total 0
drwxr-xr-x 6 root wheel 192B Oct 4 15:22 .
drwxr-xr-x 20 root wheel 640B Jan 1 2020 ..
drwxr-xr-x 4 root wheel 128B Oct 4 14:44 .timemachine
lrwxr-xr-x 1 root wheel 1B Oct 4 09:11 Boot -> /
drwxr-xr-x 13 root wheel 510B Oct 4 09:20 data
drwxr-xr-x 4 mike staff 128B Oct 4 15:22 test
root@carlisle ➜ /Library
$ chown -R root:wheel /Volumes/test
root@carlisle ➜ /Library
$ l /Volumes
total 0
drwxr-xr-x 6 root wheel 192B Oct 4 15:22 .
drwxr-xr-x 20 root wheel 640B Jan 1 2020 ..
drwxr-xr-x 4 root wheel 128B Oct 4 14:44 .timemachine
lrwxr-xr-x 1 root wheel 1B Oct 4 09:11 Boot -> /
drwxr-xr-x 13 root wheel 510B Oct 4 09:20 data
drwxr-xr-x 4 mike staff 128B Oct 4 15:22 test
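For what it's worth, one generic macOS thing that can make chown look like a no-op (not claiming it's the cause here) is the per-volume "Ignore ownership on this volume" flag. I can rule that out with something like the following, assuming the same /Volumes/test mount point as in the transcript above:

$ diskutil info /Volumes/test | grep -i owners
$ sudo diskutil enableOwnership /Volumes/test

The first command shows whether ownership is being enforced on the volume; the second turns enforcement back on if it reports "Disabled".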
@softraid-support Ugh! My apologies! I had the same issue and had even restarted a couple of times with the prior test. In this latest test, I decided to also check a few settings in SR prefs and noted a couple that were set incorrectly, but which I didn't think were relevant: "Enable SMART on USB disks" was unchecked, so I enabled it. Under the "Volumes" tab, "Mount volumes when user not logged in" was also unchecked, so I enabled that as well. One or both of these changes required a restart, coupled with the System extension "Blocked" message, so I restarted. Now, permissions are working normally on both SR external drives. Let me know if there's anything else that would help dissect this, and my apologies if I'd set something incorrectly. I'm pretty confident both of these flags had been set previously, but that was likely in a much earlier beta.
I do not understand how either of those preferences could make a difference. It could have been something off about the driver being partially blocked, but I am not sure what. Glad you are working again, and so far, you are the only one with this problem!
Hi - I previously sent a SoftRAID support file via WeTransfer?
I have no issue with the Thunderbay 8, but it's when I also attach the Thunderbay 4 with SoftRAID volumes that I get the pink-screen-of-death issues. Both Thunderbays are attached to the USB-C / Thunderbolt port on the rear of my M1 Mac mini. The issue isn't related to APFS (none of the volumes I've tried recently are APFS), nor is it RAID 5.
This issue only began with Big Sur 11.6.
Let me know if another support file would be helpful, but my preference is not to crash my Mac on purpose.
Thanks
We are still working on the cause of this. I have your user ID and info.
I am hoping to get a user's entire computer and drives (in a swap for new hardware) so we can drive it down to Apple for diagnosis. That is what it looks like it is going to take to resolve this.
@softraid-support Thanks. If I can help at all, let me know. Unfortunately I'm based in the UK, so I'm not sure a swap of my kit would be very practical.
Hey there - just found this thread after having identical issues mounting a RAID 5 OWC Thunderbay 4 on an M1 Mac mini running SoftRAID 6.1 and Big Sur 11.6. If I boot up with any combination of the three Thunderbay 4s I have (one, two, or three at once), I get a pink screen and a crash. I can start the Mac without any drive attached, but as soon as I connect one, it crashes a few moments after I see SoftRAID mount the drives. It's anecdotal, but it feels like this behavior was there with SoftRAID 6.0.5, though random and sparse - it happened a few times but seemed to "go away". However, since 6.1 it is persistent and consistently crashing. I have tried multiple cables and get the same result. I'm able to mount a RAID 0 OWC Mercury Elite Dual over Thunderbolt with a 2->3 adapter without issue.
I'm going to send you a zip containing the requested Mac OS report, the Tech Support Report and the Sys Diagnose archive as you've requested from the OP, hopefully it helps narrow down what seems like an isolated problem.
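In case it's useful to anyone else gathering the same files: as far as I know, a sysdiagnose archive can be captured from Terminal on recent macOS versions; it takes a few minutes and by default writes a large archive to /var/tmp:

$ sudo sysdiagnose
# when it finishes, pick up the resulting sysdiagnose_*.tar.gz from /var/tmp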
Has anyone tried to duplicate the problem on a Monterey beta?
Here you go.
I have not heard, but we are waiting for a system in house to test with.
Ok, thanks. Do you want my Sysdiagnose files, or will they not really add anything to the investigation at this point? Is there a sense that this could be a hardware defect in the Thunderbolt bus on some M1s, as opposed to a software issue? I ask because I wonder if Apple would simply replace the unit under AppleCare if so.
It's challenging to understand this. I am getting the test system soon, but it has not shipped to us yet.
My guess is there is a macOS bug that is somehow triggered by something in the volume. It's also possible this is an M1 bug. All I can tell is that, whatever the trigger is, it is the Thunderbolt bus that is crashing the computer.
I am having the exact same issue as @rustrx: as soon as the Thunderbay 4 tries to mount, I get a pink screen and the computer crashes (M1 MacBook Pro). The computer works fine without the Thunderbay plugged in. I have spent lots of time on the phone with OWC support, but they couldn't figure it out. Any suggestions?
I have attached my soft raid log.
Thanks,
Russell
Was it working for a while, and then the crashes started? If you look at the crash log, is this in the first line: dart-apciec1?
(That is the Thunderbolt controller, as best I have found so far.)
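If it helps with checking: kernel panic reports usually end up in /Library/Logs/DiagnosticReports (the exact file name and extension vary by macOS version, e.g. Kernel-*.panic or panic-*.ips, and they should also be visible in Console.app's report list), so something like this should show whether any report mentions that controller:

$ ls -lt /Library/Logs/DiagnosticReports | head
$ sudo grep -ril "dart-apciec" /Library/Logs/DiagnosticReports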
If you remove one of the disks (I am guessing it is RAID 5), does the volume mount?
@softraid-support Yes, it was working for about 6 months with no problems, and then it suddenly started crashing my machine. I'm not sure if there was an update somewhere in the background that caused it.
I don't know how to check the crash log for that line, but when I remove a drive, the volume does mount!
What does this mean?
Thank you!!

