MacBook Pro M1 crash while Raid drive mounting (Softraid 6.0.5 and Mac OS 11.6)

137 Posts
8 Users
9 Likes
7,255 Views
(@softraid-support)
Posts: 8005
Member Admin
 

@rhinojeembo

Sorry, this is the best solution I can offer at present. I cannot reproduce this yet, so we are looking to get a "failing" system in house; if yours is a Mac Mini, we could try to arrange a swap. Let me know if that is your case. I would need more data first, of course.

 
Posted : 01/10/2021 10:01 pm
(@bendy1234)
Posts: 11
Active Member
 

@softraid-support A quick update from my experiences. When removing a disk from the 4-bay Thunderbay I still couldn't get the SoftRAID volumes to mount, so I:

1. Wiped the 4 x 6 TB HDDs on another Mac, formatting each as HFS+.

2. Loaded them into the Thunderbay - they were shown as 4 individual 6 TB drives in Finder (as you'd expect).

3. Initialised them in SoftRAID.

4. Attempted to create a 3 TB RAID 5 array - there was an error: "SoftRAID was unable to create a file system on this volume". However, the volume showed in the SoftRAID GUI.

5. Attempted to create a second 5 TB RAID 5 array - same error.

6. Neither volume will mount. I can disable or enable Safeguard, but erasing the blank RAIDs leads to the same error. I can delete a volume.

7. At least I'm no longer getting the pink screen of death.

8. Just deleted those RAIDs and created a 3 TB RAID 4 array. Now I can't disable Safeguard on that or mount it.
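For reference, step 1 above (wiping each disk back to HFS+) can also be done from Terminal with `diskutil`. This is only a sketch: the `/dev/disk4` identifier and the volume name "Temp6TB" are placeholders, `eraseDisk` destroys everything on the target disk, so always confirm the identifier with `diskutil list` first.

```shell
# Sketch of wiping one external disk to journaled HFS+ (macOS-only; DESTRUCTIVE).
# /dev/disk4 and "Temp6TB" are hypothetical placeholders.
if command -v diskutil >/dev/null 2>&1; then
    diskutil list   # identify the correct external disk first
    # sudo diskutil eraseDisk JHFS+ Temp6TB /dev/disk4
    result="diskutil available - uncomment the eraseDisk line only after verifying the identifier"
else
    result="diskutil is macOS-only; nothing to do on this system"
fi
echo "$result"
```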

 
Posted : 02/10/2021 6:09 am
(@softraid-support)
Posts: 8005
Member Admin
 

@bendy1234

Make sure the SoftRAID driver is loading. Run this in Terminal before trying to create volumes; it will manually load the driver:

sudo kmutil load -p /Library/Extensions/SoftRAID.kext

The reason for the error when creating a volume is that the driver must be loaded for macOS to create the file system. This is a Big Sur bug.

After loading the driver manually, you should be able to create your volume normally.
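Before creating volumes it can also help to confirm the driver actually loaded. A minimal check, assuming Big Sur's `kmutil` and matching on the kext name from the command above:

```shell
# Verify the SoftRAID kext is resident before creating volumes.
# `kmutil showloaded` lists loaded kernel extensions on macOS 11 and later.
if ! command -v kmutil >/dev/null 2>&1; then
    status_msg="kmutil not found - this check only applies on macOS 11+"
elif kmutil showloaded 2>/dev/null | grep -q "SoftRAID"; then
    status_msg="SoftRAID driver is loaded"
else
    status_msg="SoftRAID driver NOT loaded; run: sudo kmutil load -p /Library/Extensions/SoftRAID.kext"
fi
echo "$status_msg"
```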

 
Posted : 02/10/2021 11:16 am
(@bendy1234)
Posts: 11
Active Member
 

@softraid-support Hi. Thanks for that. That approach threw an error in Terminal saying I hadn't accepted the latest update (b18) in System Preferences, so I sorted that and rebooted. Then Terminal told me the kext was for the wrong architecture (i.e. not arm64).

But nonetheless I tried to create a new 3 TB RAID 5 volume on the 4 x 6 TB HDDs in the Thunderbay 4 and it seemed to work ... before another kernel panic.

Removed 1 drive; it reboots and mounts. Put the removed drive back.

Then tried to create a new 15 TB RAID 5 volume in that Thunderbay. Kernel panic before the volume could be wholly created.

 
Posted : 02/10/2021 1:48 pm
(@softraid-support)
Posts: 8005
Member Admin
 

@bendy1234

Let me take a look at this over the weekend.

 
Posted : 02/10/2021 2:21 pm
(@softraid-support)
Posts: 8005
Member Admin
 

I have been trying to reproduce exactly what you have. Can you post a SoftRAID support file?

I want to see if I can detect anything.
I am starting with a clean Monterey or Big Sur installation, connecting various RAID 5 (APFS) volumes, and not getting a kernel panic. I would really like to be able to reproduce this problem.

 
Posted : 02/10/2021 8:59 pm
(@michaelprichard)
Posts: 24
Member
 

@softraid-support This should probably be in a separate thread, but I just wanted to update that after installing 6.1b18 (from b13) I'm still getting odd permission behavior. When logged in as a user, all file/folder permissions appear as "username:staff", including folders that this user should not be able to see (permissions set to root 700). When doing the same listing as root (sudo ls -alGh), permissions appear normal, at least on the moved-over HFS+ volume, and they can also be changed. On the newly created APFS SoftRAID volume/drive, all permissions are listed as owned by _unknown:_unknown and can't be altered even with sudo. "softraidtool status" shows no problems, with tool and driver both at 6.1b18. Any insights would be welcome.
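For anyone comparing machines in this thread, the version check above can be scripted. This is only a sketch and assumes nothing beyond the `softraidtool status` verb the post itself mentions:

```shell
# Capture SoftRAID's own status report (tool/driver versions), if the CLI is present.
if command -v softraidtool >/dev/null 2>&1; then
    report=$(softraidtool status 2>&1)
else
    report="softraidtool not installed on this system"
fi
echo "$report"
```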

 
Posted : 03/10/2021 9:21 am
(@softraid-support)
Posts: 8005
Member Admin
 

@michaelprichard

For newly created volumes, this should be fixed in b18.

You should be able to change this. Were you logged in as an admin when you created the volume? I need to figure out what was different in your scenario. Please attach a SoftRAID tech support file, and send me the Terminal output of the permissions on the volume. Thanks.

 
Posted : 03/10/2021 11:53 am
(@michaelprichard)
Posts: 24
Member
 

@softraid-support Hmm. I deleted the APFS volume, the only one on the 2 x 960 GB SSDs, recreated a new volume (all with 6.1b18), and appear to have the same permissions problems. I've enclosed the report file. This is a listing on that volume:

root@carlisle ➜ /Users/mike/Desktop
$ l /Volumes/datatemp/
total 0
drwxr-xr-x   5 mike      staff      160B Oct  3 19:58 .
drwxr-xr-x   6 root      wheel      192B Oct  3 21:41 ..
drwx------   4 _unknown  _unknown   128B Oct  3 19:19 .Spotlight-V100
drwx------   3 _unknown  _unknown    96B Oct  3 19:19 .fseventsd
drwx------  47 _unknown  _unknown   1.5K Sep 30 14:42 testfolder

Posted : 03/10/2021 9:30 pm
(@softraid-support)
Posts: 8005
Member Admin
 

@michaelprichard

 

Did you check this specific box in Internet Recovery mode?

 

https://support.apple.com/en-lk/guide/mac-help/mchl768f7291/mac

Select reduced security and enable this:
Select the “Allow user management of kernel extensions from identified developers” checkbox to allow installation of software that uses legacy kernel extensions.

 
Posted : 03/10/2021 10:38 pm
(@michaelprichard)
Posts: 24
Member
 

@softraid-support Yes, I did that before installing anything on the Mac. I just checked and confirmed that the settings are all still enabled. Just to review, here's the listing of the /Volumes directory as the user:

mike@carlisle ➜ /Users/mike
$ l /Volumes
total 0
drwxr-xr-x   6 root  wheel   192B Oct  4 09:12 .
drwxr-xr-x  20 root  wheel   640B Jan  1  2020 ..
drwxr-xr-x   4 root  wheel   128B Oct  4 08:42 .timemachine
lrwxr-xr-x   1 root  wheel     1B Oct  4 09:11 Boot -> /
drwxr-xr-x  12 mike  staff   476B Sep 26 09:34 data
drwxr-xr-x   6 mike  staff   192B Oct  4 09:12 datatemp

 

and then the same listing as root:

 

root@carlisle ➜ /Users/mike
$ l /Volumes
total 0
drwxr-xr-x   6 root  wheel   192B Oct  4 09:12 .
drwxr-xr-x  20 root  wheel   640B Jan  1  2020 ..
drwxr-xr-x   4 root  wheel   128B Oct  4 08:42 .timemachine
lrwxr-xr-x   1 root  wheel     1B Oct  4 09:11 Boot -> /
drwxr-xr-x  12 root  wheel   476B Sep 26 09:34 data
drwxr-xr-x   6 mike  staff   192B Oct  4 09:12 datatemp

 

Even root cannot change any permissions on the APFS volume "datatemp" (RAID mirror). What's more worrying is that newly created files appear to be created with ownership "_unknown:_unknown", which seems like it's going to cause problems, or at least is a big security hole. Above, the "data" volume is the older HFS+ drive brought over from the old server, now without its SoftRAID mirror drive, so "degraded secondary". "datatemp" is the new APFS RAID mirror volume, newly created with 6.1b18. Permissions are only respected on the Boot volume (Apple driver, 256 GB internal SSD).

 

Final wrinkle: I've got an identical M1 Mini server purchased and configured a week earlier, with the only difference being the brand of the USB drive holding the external HFS+ data volume brought over from the older server. It was originally configured with the older 6.0.5 SoftRAID version and has only been upgraded as far as 6.1b13 so far, but it has not had this permissions problem. Very confusing... I've enclosed the support file for this (working) machine here for reference.
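One generic macOS check worth running while this is investigated: confirm whether macOS is actually enforcing on-disk ownership for the affected volume. This is a standard external-volume check using `diskutil`, not a confirmed fix for the SoftRAID beta bug; the `/Volumes/datatemp` path matches the listings earlier in the thread.

```shell
# Check (and optionally enable) ownership enforcement on an external volume.
# When ownership is not enforced, macOS can report owners inconsistently.
VOL="/Volumes/datatemp"
if command -v diskutil >/dev/null 2>&1; then
    owners=$(diskutil info "$VOL" 2>/dev/null | grep -i "Owners" || echo "volume not found")
    # To enforce on-disk ownership (requires admin rights):
    # sudo diskutil enableOwnership "$VOL"
else
    owners="diskutil is macOS-only; skipping"
fi
echo "Ownership check: $owners"
```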

 

 
Posted : 04/10/2021 8:37 am
(@softraid-support)
Posts: 8005
Member Admin
 

@michaelprichard

Give me half a day and I will either figure this out or come back to you for more information. I do not understand this yet.

 
Posted : 04/10/2021 9:05 am
(@softraid-support)
Posts: 8005
Member Admin
 

Just saw you are using b8, not b18. That is probably the issue. Can you create a new volume with the current beta? Let me know.

 
Posted : 04/10/2021 9:10 am
(@michaelprichard)
Posts: 24
Member
 

@softraid-support Sorry for the confusion, but only the slightly older (and fully working) M1 Mini is running the older beta; I've held off upgrading it due to the issues with this newer machine. The support file I just sent was from the older machine, running b8 as you noted (not b13 as I incorrectly mentioned above), and was provided just as a comparison given the otherwise identical setups. The newer machine (with the support file sent yesterday, a few posts earlier) has been running b18, and the volume "datatemp" was newly created with it. But note that the brought-over HFS+ volume is also having this issue; old files retain their original permissions, but newly created files are assigned "_unknown" ownership.

 
Posted : 04/10/2021 9:27 am
(@softraid-support)
Posts: 8005
Member Admin
 

@michaelprichard

There was a bug in the beta that created volumes with incorrect ownership, but it is fixed in beta 18. Can you create the volume with beta 18 and try again?

 
Posted : 04/10/2021 12:04 pm
Page 3 / 10