Raid 10 (1+0) vs. Raid 5 – A Slightly Different View

Many people don’t realize that their computers (in most cases) contain only 2 or 3 moving parts.

  • The fans
  • The CD-ROM/optical drives
  • The hard drives

Even if they do realize this, they often forget that everything that moves has friction, and will eventually fail, especially without proper maintenance.

In my last 2 posts, I talked about my backups with Backblaze… but today I want to cover why I run a 4-disk Raid 10 (1+0) as opposed to a Raid 5.

There are hundreds of articles on this very topic. Most will speak to the benefits of Raid 10: it’s generally faster than Raid 5, and more resilient to multiple disk failures. You’ll hear people say Raid 5 is better on cost and, if properly maintained, just as reliable as Raid 10. From a pure replace-a-disk standpoint, I would tend to agree… both are pretty evenly matched. In a pure storage situation, I would normally make (and have made) the decision to go with a Raid 5 array.

But there are 2 things I’ve run into with Raid 5s that have me saying “No” to them from now on.

The first is migration.

While it’s not normal behavior to migrate an entire Raid to a new set of drives, it does occasionally happen… and in my case, it seems to happen about once every 3 years. Prior to this year, I was running a raid card that was only capable of handling 4 drives (this is true of many onboard raid cards). In a Raid 5 array, the only way to migrate the data was to find a destination larger than my raid (often tough at the time), copy everything there, and then copy it back once the new raid was built. This process would work… but it was slow, and depending on how I did the disk copy, risked missing a file (or, if it was an OS partition, losing my OS).
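If you do go the copy-out-and-back route, the part that always worried me was verifying that nothing got missed. Below is a minimal sketch of how I’d sanity-check it today, assuming a Linux box and rsync; the paths are placeholders, and this isn’t the exact process I used back then.

```python
# Hypothetical copy-out helper using rsync. Paths are placeholders; the point
# is the second, checksum-based pass that catches anything the first (faster)
# pass missed or copied incorrectly.
import subprocess

SOURCE = "/mnt/old_raid/"           # trailing slash: copy contents, not the dir itself
DESTINATION = "/mnt/temp_storage/"

def rsync_pass(*extra):
    # -a: archive (perms, times, symlinks), -H: hard links, -A/-X: ACLs/xattrs
    subprocess.run(["rsync", "-aHAX", "--delete", *extra, SOURCE, DESTINATION],
                   check=True)

rsync_pass()              # fast first pass (compares size and mtime)
rsync_pass("--checksum")  # slow second pass: re-verify every file by content
# Once the new, larger array is built, swap SOURCE and DESTINATION and repeat.
```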

With Raid 1+0 you have another option… You can break the array, remove 2 drives, replace them with 2 larger ones, wait for the rebuild to finish, then break the other half of the array, replace those drives with larger ones, resize your array and your partition, and voilà… you’ve replaced the drives with little to no downtime. It even lets you keep 2 drives in storage as backups while the array is being copied. Heck, you don’t even have to do both at once… you can play it safe and just do 1 at a time…
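For anyone wanting to try the same trick with Linux software Raid instead of a hardware card, the procedure looks roughly like the sketch below. This is a hypothetical outline using mdadm, not the steps on my Adaptec card (which has its own management tool), and every device name is just an example.

```python
# Hypothetical one-drive-at-a-time swap, assuming Linux software RAID (mdadm)
# rather than a hardware controller. All device names below are examples only.
import subprocess

ARRAY = "/dev/md0"                          # the Raid 10 array (example name)
REPLACEMENTS = [                            # every old member of the 4-disk array
    ("/dev/sdb1", "/dev/sdf1"),             # -> its larger replacement,
    ("/dev/sdc1", "/dev/sdg1"),             #    swapped one at a time
    ("/dev/sdd1", "/dev/sdh1"),
    ("/dev/sde1", "/dev/sdi1"),
]

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

for old, new in REPLACEMENTS:
    # Drop one member, add the larger drive, and wait for the rebuild to
    # finish before touching the next one.
    run("mdadm", ARRAY, "--fail", old, "--remove", old)
    run("mdadm", ARRAY, "--add", new)
    run("mdadm", "--wait", ARRAY)           # blocks until the resync completes

# Once every member has been swapped, grow the array and the filesystem.
run("mdadm", "--grow", ARRAY, "--size=max")
run("resize2fs", ARRAY)                     # or xfs_growfs for XFS
```

The important part is the wait between swaps: never pull the next drive until the previous rebuild has completely finished.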

In theory, you could try something similar with Raid 5… replacing 1 drive at a time… but given the failure rate, and the speed at which the rebuild happens, I would be very, very concerned about something going wrong. I’m also not sure how well raid cards handle scaling up Raid 5s when adding new devices (I’ve never personally tried).

The second reason I now avoid Raid 5s is actually the same reason I avoid Raid 0s. Even with Backblaze, I often still want to try to recover data from the drives themselves (not sure why, it just seems to work out that way… then again, I haven’t had this problem since having Backblaze, so maybe I lie). In a Raid 5 array, with the cheaper controller cards on the market, if a sector goes bad anywhere in the process of rebuilding the 4th drive, you can often lose the entire array… even if the system was running fine until then. Nicer cards can get around this and keep rebuilding despite the issue, just marking that entire area bad. But the point is this: once the raid card thinks the raid is bad… you’re in big trouble… even if you know 99.9% of the data is still there.
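To see why the rebuild is so fragile, here’s a toy Python illustration of the usual Raid 5 parity math (XOR across the data blocks of each stripe). It’s purely illustrative and not tied to any particular card:

```python
# Toy illustration of standard Raid 5 parity: each stripe's parity block is the
# XOR of its data blocks, so rebuilding a lost block needs every *other* block
# in that stripe to read back cleanly. Not tied to any controller's behavior.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# Three data blocks from one stripe of a 4-disk Raid 5, plus their parity.
d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks([d1, d2, d3])

# The disk holding d2 died; rebuild it from the survivors plus the parity.
assert xor_blocks([d1, d3, parity]) == d2

# But if d3 also has an unreadable sector, there is nothing left to XOR with,
# and this stripe cannot be reconstructed at all.
```

Every surviving block in a stripe has to read back cleanly for the XOR to work out; a cheap card that hits an unreadable sector mid-rebuild will often give up on the whole array rather than just that stripe.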

Thankfully, I have found (and own) software for recovering Raid 0-style data from multiple drives. You simply tell it which drives to look at, and go from there… However, the software only works if you can match up the data properly. Raid 0 and Raid 10 make this fairly easy, as the data either matches or it doesn’t… In the case of Raid 5, the software I have will only work with the 3 drives that in theory don’t contain the parity. While there may be software out there to recover data from Raid 5 arrays, I’m guessing it’s harder to find, and probably more expensive (if someone has built it).
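As a rough idea of why the mirrored layouts are so much friendlier to this kind of recovery, here’s a hedged sketch of the “does the data match” test: two healthy drives from a Raid 1/10 are block-for-block identical, so pairing them up is just a comparison. The paths and sampling strategy are placeholders, and real recovery software does far more than this:

```python
# Hedged sketch of the "does the data match" check that makes Raid 1/10
# recovery comparatively easy: two drives are a mirror pair if their contents
# are block-for-block identical (outside of whatever actually failed).

BLOCK = 64 * 1024  # compare 64 KiB chunks

def looks_like_mirror(path_a, path_b, samples=256, stride=1 << 26):
    """Sample matching offsets on two block devices and report how often they agree."""
    matches = checked = 0
    with open(path_a, "rb") as a, open(path_b, "rb") as b:
        for i in range(samples):
            a.seek(i * stride)
            b.seek(i * stride)
            chunk_a, chunk_b = a.read(BLOCK), b.read(BLOCK)
            if not chunk_a or not chunk_b:
                break                      # ran off the end of a device
            checked += 1
            if chunk_a == chunk_b:
                matches += 1
    return matches / checked if checked else 0.0

# e.g. looks_like_mirror("/dev/sdb", "/dev/sdc")  -- read-only, but needs root.
```

With Raid 5 there’s no shortcut like this: you generally have to work out the stripe size, drive order, and parity rotation before anything lines up.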

In any case, my complete/near-complete data-loss count over the last 10 years has been this:

  • Raid 0 – 3 Times (although 2 of these were predicted and expected, and protected with Backblaze)
  • Raid 1 – 1 Time (And to this day, I don’t know how both drives failed as badly as they did)
  • Raid 5 – 2 Times (Both times it happened during a rebuild, which is why I don’t trust it; 1 was on cheap hardware, the other on server-grade raid hardware)
  • Raid 10 – 0 Times (I’ve had 2 drives fail so far, but no complete failures, and I’ve performed the migration mentioned above twice as of this weekend).

I hope to try Raid 6 someday soon… but for the moment, I can’t really justify the on-paper performance hit. I’d also love to try a Raid 50 or Raid 60 at some point… maybe when I build myself one of those Backblaze storage enclosures (Pods).

One final note for people using Google, and for myself, should I ever spend an evening trying to solve this problem again… At the time of writing, the Plextor M3 Series SSDs do not appear to be compatible with the Adaptec 6805 raid card. The raid card is able to recognize 1 drive at a time… but not both plugged in at once. This is on firmware 1.04 for the SSDs, and 18668 on the 6805. I did manage to get it to recognize them briefly on a cold boot a few times, but I couldn’t get it to happen reliably. I would also assume this affects the Crucial M4s. That said, if you’re looking into buying SSDs for the Adaptec 6805, I would recommend shopping from their compatibility list (found here).

 
