
ThunderBay 4 With MacBook Pro M1 Max Crashes Upon Startup

Active Member

I submitted a "Support via Email" form on the "Contacting Tech Support" page last week and have received zero response. So, I guess I'll try here.

Got a new MacBook Pro, which obviously came preinstalled with Monterey. I have a ThunderBay 4 enclosure with 4 × 10TB drives formatted as an HFS+ RAID5 volume. It was working swimmingly for several weeks. Then I did a reboot (to finish clearing the system font cache), and the machine crashed on every subsequent boot attempt (with a brief flash of a magenta screen). Eventually the Mac disabled the SoftRAID extension, which allowed me to boot but left the volume unable to mount.

I assume this is the same kernel panic issue that is happening with other users – I've read the threads. I'm attaching the Apple Crash Report and the SoftRAID Tech Support Info document just to confirm.

Any suggestions aside from what I've read about? Copying a massive amount of data to a spare drive (which I don't have) to restripe as RAID4 is not an option. I've already tried SoftRAID beta driver 6.2.1 b3 and that did not help. The whole "clean system install" is not a reliable option – this machine was less than three weeks out from a clean install (no system migration assistant used). Not thrilled about doing another clean install (no small task) only to have the crashes show up again in a couple of weeks.

EDIT: I cannot seem to attach the Apple crash report. I get a forum error ("Filetype Not allowed") whether I use a .rtf, .txt, or .zip file. I'll paste the first part of the crash report here:

panic(cpu 0 caller 0xfffffe001c0d2ba8): "dart-apciec2 (0xfffffe150e864800): DART(DART) error: SID 1 write protect exception on write of DVA 0x80074000 (SEG 0x40 PTE 0x1d) ERROR_STATUS 0xb0100010 TIME 0x1764ab17d6 TTE 0xb101ef51001 AXI_ID 0" @AppleT8110DART.cpp:1694
Debugger message: panic
Memory ID: 0x6
OS release type: User
OS version: 21A559
Kernel version: Darwin Kernel Version 21.1.0: Wed Oct 13 1701 PDT 2021; root:xnu-8019.41.5~1/RELEASE_ARM64_T6000
Fileset Kernelcache UUID: 3B2CA3833A09A383D66FB36667ED9CBF
Kernel UUID: 67BCB41B-BAA4-3634-8E51-B0210457E324
iBoot version: iBoot-7429.41.5
secure boot?: YES
Paniclog version: 13
KernelCache slide: 0x00000000130d4000
KernelCache base:  0xfffffe001a0d8000
Kernel slide:      0x00000000138fc000
Kernel text base:  0xfffffe001a900000
Kernel text exec slide: 0x00000000139e4000
Kernel text exec base:  0xfffffe001a9e8000
mach_absolute_time: 0x8fc3b4c8
Epoch Time:        sec       usec
  Boot    : 0x61afa9f0 0x000d0e63
  Sleep   : 0x00000000 0x00000000
  Wake    : 0x00000000 0x00000000
  Calendar: 0x61afaa4c 0x000ab3cd
Zone info:
Foreign   : 0xfffffe00222e0000 - 0xfffffe00222f4000
Native    : 0xfffffe100076c000 - 0xfffffe300076c000
Readonly  : 0 - 0
Metadata  : 0xfffffe817ca74000 - 0xfffffe81889f0000
Bitmaps   : 0xfffffe81889f0000 - 0xfffffe81a0220000
CORE 0 PVH locks held: None
CORE 1 PVH locks held: None
CORE 2 PVH locks held: None
CORE 3 PVH locks held: None
CORE 4 PVH locks held: None
CORE 5 PVH locks held: None
CORE 6 PVH locks held: None
CORE 7 PVH locks held: None
CORE 8 PVH locks held: None
CORE 9 PVH locks held: None
CORE 0 is the one that panicked. Check the full backtrace for details.
CORE 1: PC=0xfffffe001ab1e864, LR=0xfffffe001ab1e7fc, FP=0xfffffe611884b650
CORE 2: PC=0xfffffe001ab5d9b0, LR=0xfffffe001ab6c17c, FP=0xfffffe603275be80
CORE 3: PC=0xfffffe001ab5d9b0, LR=0xfffffe001ab6c17c, FP=0xfffffe603280be80
CORE 4: PC=0xfffffe001d56d534, LR=0xfffffe001d53f6ec, FP=0xfffffe603272b3a0
CORE 5: PC=0xfffffe001ab5d9b4, LR=0xfffffe001ab6c17c, FP=0xfffffe60328f3e80
CORE 6: PC=0xfffffe001ab6c188, LR=0xfffffe001ab6c184, FP=0xfffffe6118cf3e80
CORE 7: PC=0xfffffe001aa6ec6c, LR=0xfffffe001aa6ec6c, FP=0xfffffe61015d3ef0
CORE 8: PC=0xfffffe001aa6ec6c, LR=0xfffffe001aa6ec6c, FP=0xfffffe6118ce3ef0
CORE 9: PC=0xfffffe001aa6ec70, LR=0xfffffe001aa6ec6c, FP=0xfffffe6118b93ef0
Panicked task 0xfffffe1519b9fbe8: 1806 pages, 10 threads: pid 559: sharingd
Panicked thread: 0xfffffe150ee04000, backtrace: 0xfffffe603e6af4d0, tid: 6036
 lr: 0xfffffe001aa3a488  fp: 0xfffffe603e6af540
 lr: 0xfffffe001aa3a158  fp: 0xfffffe603e6af5b0
 lr: 0xfffffe001ab76558  fp: 0xfffffe603e6af5d0
 lr: 0xfffffe001ab692d4  fp: 0xfffffe603e6af650
 lr: 0xfffffe001ab66c9c  fp: 0xfffffe603e6af710
 lr: 0xfffffe001a9ef7f8  fp: 0xfffffe603e6af720
 lr: 0xfffffe001aa39dcc  fp: 0xfffffe603e6afac0
 lr: 0xfffffe001aa39dcc  fp: 0xfffffe603e6afb30
 lr: 0xfffffe001b238748  fp: 0xfffffe603e6afb50
 lr: 0xfffffe001c0d2ba8  fp: 0xfffffe603e6afdb0
 lr: 0xfffffe001c0d2544  fp: 0xfffffe603e6afe50
 lr: 0xfffffe001c0d1da8  fp: 0xfffffe603e6aff00
 lr: 0xfffffe001b15d71c  fp: 0xfffffe603e6aff40
 lr: 0xfffffe001bbf5014  fp: 0xfffffe603e6affd0
 lr: 0xfffffe001ab69e0c  fp: 0xfffffe603e6affe0
 lr: 0xfffffe001a9ef86c  fp: 0xfffffe603e6afff0
      Kernel Extensions in backtrace:[720C6E12-91B9-3AF9-BDCE-D1060D7B0534]@0xfffffe001c0cc580->0xfffffe001c0d5d67
last started kext at 746323153: 6.0.0 (addr 0xfffffe001a7df1a0, size 3432)
loaded kexts:
com.softraid.driver.SoftRAID 6.2.1b3
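As an aside, the Epoch Time fields in the log are hex Unix timestamps. A quick sketch (decoding the Boot and Calendar values above) shows the panic was logged roughly 92 seconds after boot:

```python
from datetime import datetime, timezone

def decode(hex_sec, hex_usec):
    """Convert the panic log's hex sec/usec pair to a UTC datetime."""
    return datetime.fromtimestamp(
        int(hex_sec, 16) + int(hex_usec, 16) / 1e6, tz=timezone.utc)

boot = decode("0x61afa9f0", "0x000d0e63")    # "Boot" line from the log
panic = decode("0x61afaa4c", "0x000ab3cd")   # "Calendar" line from the log
print(boot.isoformat())
print(f"panic logged {(panic - boot).total_seconds():.0f} s after boot")
```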
Topic starter Posted : 14/12/2021 2:26 pm
Member Admin

I tested the forum support links today and they worked; I don't know what happened, and I am sorry about that.

You can attach .txt files. Note that TextEdit often saves a .txt file as .txt.rtf or similar; maybe that was the problem.

This is the same panic we have been working on. I am hoping a resolution comes soon; Apple engineering now has a computer that reproduces the panic, so progress should be forthcoming.

I have not seen (to my knowledge) a "clean" install of Monterey with this panic on M1, but it is certainly possible, and it could easily be something that occurred or was installed after purchase. We do not know the trigger yet.

Does your system crash if you remove one drive at startup? Can you insert that drive after a few minutes and have it rebuild without crashing?

Posted : 14/12/2021 8:55 pm
Active Member

If I boot with one of the mechanisms disconnected, the machine boots fine and mounts the volume. When I reattach the disconnected mechanism, SoftRAID starts rebuilding the volume.

I tried again and still cannot successfully attach a .txt file to a forum message. This is a pure text file saved out of BBEdit. It won't let me attach a standard .png screenshot either.

Topic starter Posted : 16/12/2021 12:46 pm
Member Admin

I just attached a text file from BBEdit without trouble. PNGs cannot be attached at present; save as .jpg instead.

Posted : 16/12/2021 1:49 pm
Active Member

Just an update... I tested the suggestion of unplugging one mechanism before connecting the ThunderBay to the Mac. This allows the RAID to mount, and SoftRAID successfully rebuilds the array after the fourth mechanism is reinserted. This is useful if I need to get a file off the RAID, but removing a mechanism on every reboot is obviously not a reasonable ongoing solution.
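For anyone wondering why the pull-a-drive workaround leaves the data intact: RAID 4 and RAID 5 store XOR parity across the stripe, so any single missing drive's blocks can be recomputed from the survivors. A toy sketch of the idea (not SoftRAID's actual implementation):

```python
import functools
import os

# One 16-byte block per data drive in a stripe (toy sizes).
stripes = [os.urandom(16) for _ in range(3)]

# Parity block: byte-wise XOR across the data blocks.
parity = bytes(functools.reduce(lambda a, b: a ^ b, col) for col in zip(*stripes))

# Pretend drive 2 was pulled; rebuild its block from the survivors + parity.
missing = stripes[1]
survivors = [stripes[0], stripes[2], parity]
rebuilt = bytes(functools.reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))
assert rebuilt == missing  # XOR of the survivors recovers the lost block
```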

Also, I updated to Monterey 12.1, and it had no effect on the crash (others had posted that it fixed the issue for them?).

Waiting (patiently) for a proper fix to this significantly impactful issue.

Topic starter Posted : 27/12/2021 6:55 pm
Member Admin


We are waiting "impatiently" for this to get resolved, hopefully soon. Sorry to all users who need this workaround, but we are in a holding pattern for a short while longer.

Posted : 27/12/2021 10:25 pm
Active Member

Hey SoftRAID Support,

I decided to spend the considerable time needed to back up the contents of my RAID5 volume (all 10TB of it) to external bare drives so I could reformat my ThunderBay 4 as RAID4, as you have recommended in this forum, so I wouldn't have to keep pulling drives and rebuilding the array just to get it to mount. You indicated in previous posts that there should be no impact on performance.

After doing this, write speeds dropped by 27% and read speeds increased by 11%. That is hardly "no impact." See the attached AJA speed tests.

RAID5 Speed Test
RAID4 Speed Test
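For a rough cross-check of sequential write speed independent of AJA or Blackmagic, a minimal sketch like the one below works (the volume path is a placeholder; expect lower numbers than the GUI tools, which use larger queued transfers):

```python
import os
import time

def write_throughput_mb_s(path, size_mb=256, block=1 << 20):
    """Rough sequential-write estimate: write size_mb MB, fsync, time it."""
    buf = os.urandom(block)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())  # force data to disk before stopping the clock
    elapsed = time.perf_counter() - start
    os.remove(path)
    return size_mb / elapsed

# Example usage (placeholder path on the RAID volume):
# print(write_throughput_mb_s("/Volumes/ThunderBay/speedtest.bin"))
```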



Topic starter Posted : 28/01/2022 7:59 pm
Member Admin



Hmmm. It should not have; it probably depends on the drives. Attach a SoftRAID support file and I can see whether I have data on your drives.

On the other hand, since 80% of activity on volumes is reads, it should work out well enough.
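To put numbers on that: weighting the measured changes from the AJA tests by an assumed 80/20 read/write mix gives a small net gain, though that is cold comfort for write-heavy workloads:

```python
read_change, write_change = 0.11, -0.27  # measured changes after RAID5 -> RAID4
read_share = 0.80                        # assumed read fraction of volume activity

net = read_share * read_change + (1 - read_share) * write_change
print(f"net throughput change ~ {net:+.1%}")  # ~ +3.4%
```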

Posted : 28/01/2022 10:12 pm
Active Member
Posted by: @softraid-support



Hmmm. It should not have; it probably depends on the drives. Attach a SoftRAID support file and I can see whether I have data on your drives.

On the other hand, since 80% of activity on volumes is reads, it should work out well enough.

Well, obviously I don't mind increased read speed, but not at the expense of write speed. I ran more tests after a reboot and after copying the data back to the RAID; see the attached screengrabs and the SoftRAID support file. Note that the Blackmagic test shows higher throughput, but the AJA test still shows reduced write performance.

Screen Shot 2022 01 30 at 9.43.14 PM


Topic starter Posted : 30/01/2022 9:07 pm
Member Admin


The drives are good models, no problem there.

Can you live with it until Apple fixes/gives us a fix for the RAID 5 issue?

Posted : 30/01/2022 9:26 pm