To kick off the new year, we recently released our Storage Predictions for 2009. We’ve received a lot of interest in this list since we released it, and I personally have been asked about prediction number 3, “RAID Will Hit a Data Dead End”. Allow me to explain.
For this prediction, we say:
RAID Nears Retirement. As multi-tiered storage continues to evolve, SANs will become more complex, unified networks will emerge, and as newer and larger drive technologies such as 1 TB drives take root, RAID as a data protection technology will become irrelevant. Advanced data protection schemes based on Erasure Coding technology for long-term reliable data storage will take hold, putting additional pressure on legacy solutions depending on RAID.
RAID is a technology that has served us well, but there are two ways in which it fails to scale going forward. Most importantly, RAID technologies today have serious problems with large capacity drives, like the 1 and 1.5 TB drives shipping now. These problems will only become more pronounced with the 2 TB drives soon to be available.
First, RAID has an issue with the bit error rates on high-capacity drives, a problem I discuss in detail in our video “The Trouble with RAID”. The bit error rate is the rate at which a drive will fail to read a block. These failures are not due to complete spindle failures, but due to the statistical encodings used to store bits into magnetic domains on the drive. Drives don’t incur the penalty of read-after-write to verify the data written, so sometimes they manage to write data that cannot be read later despite the sophisticated error correcting codes used to protect the data on disk.
As I explained in an earlier post, the bit error rate of the drives can be catastrophic for RAID. In a RAID 4 or 5 rebuild it is necessary to read every bit off all the remaining disks, and with high-capacity drives there is a significant probability that at least one of those reads will fail, sinking the rebuild. In RAID 6, the same problem occurs in the event of a double failure.
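To put a rough number on it: assuming a bit error rate of 1 unrecoverable error per 10^14 bits read (a commonly quoted spec for consumer-class SATA drives; the specific drive count and capacity below are illustrative assumptions, not figures from the video), the odds of a clean RAID 5 rebuild with today's large drives are surprisingly poor:

```python
import math

def rebuild_failure_probability(surviving_drives, capacity_tb, bit_error_rate):
    """Probability that at least one unrecoverable read error occurs
    while reading every bit off the surviving drives during a rebuild."""
    bits_to_read = surviving_drives * capacity_tb * 1e12 * 8
    # P(all reads succeed) = (1 - BER)^bits; use exp/log1p for numerical stability
    p_all_reads_succeed = math.exp(bits_to_read * math.log1p(-bit_error_rate))
    return 1.0 - p_all_reads_succeed

# Hypothetical 7-drive RAID 5 set of 2 TB drives: one drive fails,
# the rebuild must read all 6 survivors end to end
p = rebuild_failure_probability(6, 2.0, 1e-14)
print(f"{p:.0%}")  # roughly a 62% chance the rebuild hits an unreadable block
```

That is, with drives this large, a failed rebuild is more likely than a successful one.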
This very problem was raised at the recent Gartner Data Center Conference, in “The Enterprise Storage Scenario” by Roger Cox and Dave Russell. RAID is not a technology that is going to survive with higher and higher capacity drives, and enterprises must look to technologies like advanced erasure coding to meet data protection requirements.
Permabit Enterprise Archive protects against this pitfall within our RAIN-EC storage architecture. By recording additional recovery information we can rebuild from up to 8K of unreadable data without having to fail a drive or recover from another location. Even in the event of multiple failures you’re still protected against the bit error rate, something that RAID can’t do.
The second problem RAID faces is increased rebuild times. While drive capacities continue to grow exponentially, drive read performance does not. The read rate depends on spindle speed and linear bit density. Large capacity drive spindles aren’t spinning any faster, with all of them in the 5400 or 7200 RPM class. Bit density is going up, but read rates only improve with the square root of the rate at which capacity increases, because capacity grows in two dimensions (around the disk and across it) while read rate benefits from only one.
This means that RAID rebuilds take unacceptably long times on high capacity drives. Consider a rebuild at 25 MB/s for a set of 2 TB drives — this will take more than 22 hours! Can your data be without protection for nearly a full day?
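The arithmetic is easy to check (25 MB/s is an assumed sustained rebuild rate, matching the figure above):

```python
capacity_bytes = 2 * 10**12   # one 2 TB drive (decimal terabytes, as drive vendors count)
rebuild_rate = 25 * 10**6     # assumed sustained rebuild rate of 25 MB/s
hours = capacity_bytes / rebuild_rate / 3600
print(f"{hours:.1f} hours")   # 22.2 hours of degraded operation
```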
Permabit Enterprise Archive’s RAIN-EC architecture helps here as well. While in a RAID system a group of drives constitute a set, RAIN-EC distributes data in a more sophisticated manner. The recovery information for data on one drive is spread evenly across all the other drives in the system. This means that in the event of a drive failure all the other drives participate in the reconstruction process, and each drive is only responsible for a small portion of the recovery. Thus, the rebuild rate goes up with each additional drive in a RAIN-EC system. With RAID, adding more drives always makes the rebuild rate go down (or stay the same).
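As a toy model of that scaling effect (this is my own simplification for illustration, not Permabit's actual implementation, and it ignores parity overhead and network contention), compare how rebuild time responds to system size under the two approaches:

```python
def raid_rebuild_hours(capacity_tb, drive_rate_mb_s):
    # Conventional RAID: the rebuild proceeds at roughly one drive's speed
    # no matter how many drives are in the set
    return capacity_tb * 1e6 / drive_rate_mb_s / 3600

def distributed_rebuild_hours(capacity_tb, drive_rate_mb_s, total_drives):
    # Distributed recovery: the failed drive's recovery work is spread
    # evenly across all surviving drives, which reconstruct in parallel
    return capacity_tb * 1e6 / (drive_rate_mb_s * (total_drives - 1)) / 3600

print(f"RAID, 2 TB drive at 25 MB/s:   {raid_rebuild_hours(2.0, 25):.1f} h")
print(f"Distributed across 16 drives:  {distributed_rebuild_hours(2.0, 25, 16):.1f} h")
```

In this model, doubling the number of drives roughly halves the rebuild window, instead of leaving it flat or making it worse.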
That’s the pressure from the high capacity side, but RAID arrays, at least for disk, face serious pressure from the other side too: Solid State Drives (SSDs). SSDs massively outperform low capacity 10K and 15K RPM drives, and within 18 months they’ll be at an equivalent price. Additionally, STEC tells me that their Zeus SSDs have bit error rates as low as 1 in 10^17, which protects data during rebuilds significantly better than the equivalent 15K RPM drives.
Given reliability concerns when using high capacity disk drives and the end of the road in view for 15K RPM performance-oriented disk, RAID arrays are being squeezed from both sides. High performance systems will continue to use similar technology on SSD, but archive systems require more advanced technologies for the future, and the future, as always, is sooner than you think.