@steve223
Unfortunately, it's not our problem to solve; it's Apple's. They have made progress in recent macOS releases, and I have seen rumors that the next macOS may add a new "energy savings" layer, so some of these sleep/hibernate issues may go away.
Also, this terminal command has helped some:
sudo pmset -a hibernatemode 0
Others have reported that deleting the sleep image and restarting helped:
sudo rm /var/vm/sleepimage
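For anyone trying this: it is worth confirming the setting actually took, and knowing how to get back to the default. A minimal sketch (the default values here are assumptions based on typical Macs; verify on your own machine):

```shell
# Verify the change took effect (should print "hibernatemode 0"):
pmset -g custom | grep hibernatemode

# To restore the usual default later (3 on laptops, 0 on most desktops):
sudo pmset -a hibernatemode 3
```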
@softraid-support It may be Apple's problem to solve, and I'm not disputing that. However, I have eight Thunderbolt 4 hard drives connected, and unfortunately the OWC devices are by far the ones with the most disconnections. It doesn't really happen that much with other devices, so some part of the problem must be on OWC's side.
We are the "reporters". SoftRAID aggressively reports all such events. macOS generally does not report this; it mostly ignores disk ejects on sleep. That was how Apple "fixed" this, so you are unlikely to see it on Apple-formatted volumes, except perhaps when there are open files.
I agree it's frustrating.
You would be keeping your farm. 😉 I had a look at the logs, and the good thing was that I had allocated three SATA SSD lanes per Thunderbolt port, because the logs showed that it was always a group of three SSDs that disappeared, which resulted in the unmount. My guess is that the culprit was the Mac Mini's Thunderbolt power negotiation: the M4 Mini's Thunderbolt controller and power management are designed to handle up to two bus-powered enclosures on the three rear ports (at least so they say), but those M.2-to-SATA adapters (with the ASM1166 chipset) probably drew too much power after all. I'm now using a dual-M.2 enclosure by Anyoyo, which has its own 100W power supply, and its USB4/TB4 chipset can bifurcate PCIe x4 into x2+x2, which is exactly what I need for the two M.2-to-SATA adapters. The storage pool has now been running stably for 24 hours. Interestingly, even with the added powered device, the total system power decreased by up to 5W, which shows how much the Mac Mini was struggling with my original setup.
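If anyone wants to do similar detective work on their own drops, macOS can show the Thunderbolt device tree and the eject events without SoftRAID. A rough sketch (the log predicate is an assumption; you may need to adjust the search terms for your machine):

```shell
# Show the Thunderbolt device tree, including which devices share a port --
# useful for spotting which enclosures hang off the same bus:
system_profiler SPThunderboltDataType

# Search the unified log for disconnect messages around the time a volume dropped:
log show --last 1h --predicate 'eventMessage CONTAINS "disconnect"'
```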
Thanks for the feedback. I am surprised the Mini has such low specs on this, but it is designed to be a "tabletop" home system. I will pass this to engineering, in case they can use this info.
@softraid-support My original setup actually had three bus-powered M.2 enclosures: two for the ASM1166 SATA adapter/expansion cards, and one for a standard M.2 NVMe SSD. The latter, however, was daisy-chained behind the first Anyoyo/Acasis enclosure, which is why I thought the Mini's Thunderbolt configuration wouldn't care about it, especially with regard to power management. And I was probably right, because after detaching that daisy-chained enclosure, the first two bus-powered ones with the SATA adapters still always dropped a group of three SSDs at some point. So the Mini's Thunderbolt power management really seems to be configured very tightly, basically allowing nothing but M.2 SSDs in bus-powered enclosures, and a maximum of two bus-powered enclosures at that. It could be different for the M4 Pro Mac Mini, and it is probably different for the Mac Studio, but my suggestion for people who plan on building a DIY Mac Mini server (see also the article "Mac Mini as a Low Idle Home NAS" on michaelstinkerings dot org) is to use SATA adapters in an enclosure that has its own power supply. Going with only one adapter (6 SATA ports via PCIe 3.0 x2) in a single bus-powered enclosure could work, I guess, but for peace of mind you should go with something that has its own power supply. At any rate, setting up SoftRAID and a RAID5 was super easy, and I'm really glad we have this option on macOS: thank you! (By the way: any plans on adding RAID6 support? I personally don't need it, because I have an SSD storage pool whose rebuild times for 4TB disks are probably fast enough, but it could be interesting for people using HDDs.)
An update on my setup, just FYI in case anyone wants to hack together something similar. I have now upgraded from the M.2-to-6xSATA adapters (ASM1166 chipset) to M.2-to-2xSFF-8087 adapters (using the newer RTL9101 chipset), with 8 SATA lanes each for a total of 16 SATA connections via two adapters. (One option is the adapter branded "Lekuo".) I've only had the new setup for a few hours, but it seems to be running as stably as the ASM1166 adapters did before. (Will post updates if that changes.) Meanwhile, the power draw of the whole system at idle is about 1–1.5W lower than with the ASM1166 adapters, and it seems to play more nicely with the Anyoyo dual-M.2 enclosure: much more realistic read speeds (ca. 1.7–1.8 GB/s for a RAID5 with 6 SSDs), which mirrors the fact that the Anyoyo has to halve the Thunderbolt bandwidth, i.e. PCIe x2 per M.2 slot. With the ASM1166 adapters I saw occasional bursts up to 2.2 or 2.3 GB/s, which never sat well with me. So the slightly lower speeds could be a sign that the RTL9101 is more stable in my setup. PS: if anyone plans to hack together something similar, I can't yet say which chipset is better or more stable: ASM1166 or RTL9101. Just don't go for adapters using the JMB585, because those (according to Michael's Tinkerings) use the JMB575 port multiplier instead of providing independent native SATA controllers with dynamic PCIe bandwidth negotiation, as the RTL9101 and the ASM1166 do.
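In case anyone wants to sanity-check their own throughput numbers: a crude sequential-read test against the raw device avoids the file-system cache. A sketch only (`rdiskN` is a placeholder for your array's device node):

```shell
# First identify the SoftRAID volume's device node:
diskutil list

# Then read 4 GiB from the raw device; rdisk bypasses the buffer cache,
# so the rate dd prints at the end reflects actual array throughput:
sudo dd if=/dev/rdiskN of=/dev/null bs=1m count=4096
```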

