I'm running SoftRAID 7. It sees my external RAID 5 (it's in the list) and reports no errors, but it doesn't mount the volume, either when the computer is turned on or when I right-click and choose "Mount." At that point it intermittently says "mounting," but the volume never mounts, and it finally returns to "unmounted" status. I've been holding off on installing version 8 until 8.1 comes out, out of an abundance of caution. Any advice?
Attach a SoftRAID tech support file, and I can look. The volume directory may be damaged.
There is no need to "wait" on SoftRAID upgrades, as the macOS version determines which driver is loaded, i.e., Sonoma 14.4 and later loads the 8.0 driver regardless of which SoftRAID version you launch.
It was the volume directory. I ran DiskWarrior and it repaired the directory, and it mounted. Actually, the directory was so damaged that there were damaged files, so I am wiping the drives, certifying them, and restoring the data from a backup drive.
Thanks!
Chris
So it turns out two of the drives were bad; I had to certify all four to find that out. To be on the safe side, I copied off the data, broke the RAID, and tried to certify them. It's a shame that SoftRAID said there were no errors. I guess there's only so much the software can see in real time.
What SoftRAID can track in real time are I/O errors. The upside is that all I/O errors are logged; the downside is that, by nature, there are many "false alarms," since there are many ways for a disk to report an I/O error, from a bad connection to a damaged directory that points to a nonexistent location on the disk. But an I/O error does give a clue that something is wrong.
Predictive failure is pretty good on HDDs; there are no "false alarms" when a disk has reallocated sectors.
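For context, the reallocated-sector count mentioned above is SMART attribute 5, which utilities like `smartctl -A` report. Here is a minimal sketch of checking that attribute; the sample output is hard-coded and the function names are illustrative, not part of SoftRAID:

```python
# Sketch: flag a drive as likely failing when SMART attribute 5
# (Reallocated_Sector_Ct) has a nonzero raw value.
# In practice you would feed this the output of `smartctl -A`;
# the sample below is hypothetical data for illustration.

SAMPLE_SMARTCTL_OUTPUT = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   095   095   010    Pre-fail  168
  9 Power_On_Hours          0x0032   073   073   000    Old_age   24321
194 Temperature_Celsius     0x0022   036   053   000    Old_age   36
"""

def reallocated_sectors(smart_output: str) -> int:
    """Return the raw value of SMART attribute 5, or 0 if absent."""
    for line in smart_output.splitlines():
        fields = line.split()
        if fields and fields[0] == "5":
            return int(fields[-1])
    return 0

def likely_failing(smart_output: str) -> bool:
    # Per the reply above: reallocated sectors on an HDD are a
    # reliable (no-false-alarm) sign the disk is going bad.
    return reallocated_sectors(smart_output) > 0
```

With the sample data, `likely_failing(SAMPLE_SMARTCTL_OUTPUT)` is `True`, since the drive reports 168 reallocated sectors.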
Flash media is more difficult, we are still working on better ways of predicting failure on flash media.
And sometimes drives fail suddenly, with no warning at all, such as when a head crashes/breaks.
It's kind of an art, in addition to being a science.
Thanks for that. It's clear you guys take what you do very seriously and put in tremendous effort, and it's much appreciated.
I have what may be a new issue or a related issue. New because it is a new set of drives in another RAID 5; potentially related because it is happening at the same time as the other one. First: do you know if the CalDigit TS3 has any issues with macOS Sonoma and RAID volumes? I ask because when plugging this new problem drive in through the TS3, I got an error message that it was trying to pull too much power. And that's when I got an I/O error on a drive in this RAID. (This is an OWC enclosure.)
In any event, the issue I'd like your help with is that one of the drives is showing an I/O error, and I would like to replace it. However, when I try to remove the drive from the RAID, I get a message from SoftRAID that an error has occurred and the software is blocked from accessing one or more disks. I have made sure that SoftRAID has Full Disk Access (it did not the first time this happened), and I've closed all other software that I can. I upgraded to SoftRAID 8 as well (and yes, given it Full Disk Access). I'm attaching a screenshot of the error message.
I've watched and followed the video tutorial, but as I said this error message pops up.
That error message from the dock is often a cable issue.
Attach a SoftRAID tech support file.
The problem with removing the drive was simple: the volume was still mounted. I was able to remove it after unmounting the volume. I'm including the report anyway, in case you see anything with respect to the underlying cause of why I had two issues, one after the other.
Did macOS just auto-update?
When did you first see this EFI volume showing?
We started seeing this recently, often with a macOS upgrade that wipes out the beginning of the volume.
No, I haven't done the update yet. I choose to do them manually and tend to wait a bit, just in case there are issues. Okay, so it's complex. I had this older USB 3.0 OWC Elite Pro enclosure with Toshiba drives in it. They are 2015 drives. I certified the drives in that enclosure, created a RAID 5, and put data on them. I then decided to move them to another OWC enclosure with a USB 3.1 Gen 2 connection, to have them in a newer enclosure. I plugged that into a 3.1 port on a CalDigit TS3 connected to my 2019 iMac. I got an error message from macOS that it was trying to pull too much power through the accessory. I also got error messages from SoftRAID that the drives were bad (or something like that; honestly, I don't recall for sure). So I put the drives back in the USB 3.0 enclosure, connected the drive as the first time, and got an I/O error on one of the drives, and that's also when I first saw this EFI volume.
Now I'm rebuilding the RAID. Not sure what I'm going to do with it next, though. This currently has a copy of the data that was on the first RAID that I originally had trouble with. The data is back on that one again, so this is a second copy. Both of these have very old hard drives (2015-2017) in them and should be refreshed with newer ones. I think I'll buy 4 fresh drives, create one RAID 5, and get rid of the old drives. I'm researching the most cost-effective choice now. The solution does not have to be for speed, just storage. Any suggestions on approach? Thanks for everything.
Just so you know, although it is too late for this, USB on the Mac is not as robust as it should be. Thunderbolt is the better solution.
I wonder if you had a bad (or poorly connected) cable from the enclosure to the dock, or from the dock to the computer.
Also, if you do not have the high power charging enabled (requires an extension), I suppose that could have contributed, even though in theory it should not make a difference for drives. Last idea, one of the drives could have been pulling excess power during power up.
Either way, macOS should never have done that damage to the partition map (an EFI volume header overwriting your volume header). I think this is a recent macOS bug; it was never a thing before, and now we have seen it a few times.
Yes, I'm moving from USB to Thunderbolt as I can. Thanks for the analyses and for all the help!
I am experiencing the same thing as the OP...
Currently certifying my four 18TB HDDs, which are supposedly renewed, all in a new ThunderBay enclosure. I have a feeling they might fail. Any idea how long a one-pass certify takes? It's been going for about 12 hours and the "timer" doesn't seem to be moving; it's hovering around 39:10:00. Am I supposed to read that as 39 hours and 10 minutes?
A certify works because it is a thorough test. It takes about a day per 2TB, so on 18TB drives, expect 9 days. Sorry, but you are testing the entire surface of the drive for flaws that may not show up until you store data on that portion of the disk, which could be a couple of years from now.
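The day-per-2TB rule of thumb above works out like this (a hypothetical helper; the only real input is the throughput figure from the reply):

```python
# Back-of-the-envelope certify-time estimate from the rule of thumb
# above: one full certify pass covers roughly 2 TB per day.

TB_PER_DAY = 2  # approximate certify throughput, per the reply above

def certify_days(capacity_tb: float, passes: int = 1) -> float:
    """Estimated days to certify a drive of the given capacity."""
    return capacity_tb * passes / TB_PER_DAY

# An 18 TB drive at one pass: 18 / 2 = 9 days, i.e. about 216 hours.
```

So for the 18TB drives above, a single pass is on the order of 9 days per drive, and adding passes scales the estimate linearly.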
This is why drives are sold "untested": it is too time-consuming.
One undervalued service OWC provides when selling pre-populated enclosures is pre-certifying all disks sold in those enclosures. We actually have several rooms dedicated to certifying disks in the actual enclosures they will be shipped in. This greatly reduces failure rates for users.
Because drives are often in short supply, this is not something we can do for standalone drives. Imagine: when we get a new shipment of 18TB drives, it would be 10 days before we could sell them, and there are low profit margins on raw mechanisms.
It's possible we will offer this someday.
@softraid-support
WOW! This was super insightful. Thanks for the info! That makes a lot of sense!

