Permabits and Petabytes

July 24, 2008

Statistical Demons Lurk Everywhere

Filed under: Jered Floyd — jeredfloyd @ 1:57 am

In my last post I wrote about why hash collisions are fundamentally not something to be concerned about in a deduplicating storage system that uses SHA-2 hashes. Now I’ll use the same logic that other vendors use to attack hash-based systems to demonstrate that their systems may corrupt data even more frequently than hash collisions!

Hash-based systems are criticized based on the possible occurrence of an incredibly statistically unlikely event, but such analyses are done only in the abstract, assuming “perfect” hardware. Let’s talk about the systems on which data storage is actually done.

NetApp claims that their A-SIS deduplication is more robust because after using a 16-bit hash to find candidate blocks, they do a full binary compare on every possible match. Leaving aside the terrible performance implications of this (more on that later), what do the statistics say here? That is, what are the chances that the routine doing the bit-by-bit compare says the data matches when it actually doesn’t?
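
To make the approach concrete, here is a minimal Python sketch of the candidate-then-compare scheme as I’ve described it. The fingerprint function and data structures are my own stand-ins for illustration, not NetApp’s actual implementation.

    # Illustrative sketch (not NetApp's code): find candidate blocks with a
    # weak fingerprint, then do a full byte-by-byte compare before sharing.
    import zlib

    store = {}  # fingerprint -> list of stored 4 KB block contents

    def fingerprint(block: bytes) -> int:
        # Stand-in weak checksum, truncated to 16 bits per the claim above.
        return zlib.crc32(block) & 0xFFFF

    def dedup_write(block: bytes) -> bytes:
        candidates = store.setdefault(fingerprint(block), [])
        for existing in candidates:
            if existing == block:      # full bit-by-bit compare on every candidate
                return existing        # duplicate found: share the existing block
        candidates.append(block)       # no match: store a new copy
        return block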

How could that happen? Simple. The bit-by-bit compare is being done by a microprocessor. Assuming the disks are perfect (they aren’t) and the busses are perfect (they aren’t), the microprocessor’s circuits are still vulnerable to soft errors… bit flips caused by cosmic rays, thermal disruption, current leakage, and other similar sources. These risks are the reasons why ECC is used for system memory, but once you’re in the processor there is almost never ECC or other redundancy. The only systems that have multiple processors running the same software and checking each other are generally flight control systems such as those found on the Space Shuttle. Certainly not those found in storage.

Modeling of today’s semiconductor manufacturing technology indicates that current levels of integration have a soft-error rate (SER) of around 10 FIT per processor. A FIT (failure in time) is equivalent to one error in a billion hours, so 10 FIT is an MTBF of around 11,000 years. That means that for every 11,000 years of operation (or so), you should expect a logic error in a 2008-era processor, the same processor that’s used to do NetApp’s bit-wise compare.
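
For the skeptical, the unit conversion is straightforward; a quick check using the 10 FIT figure assumed above:

    # Convert an assumed soft-error rate of 10 FIT into an MTBF in years.
    FIT = 10                                     # failures per billion device-hours
    hours_between_errors = 1e9 / FIT             # 100 million hours
    years_between_errors = hours_between_errors / (24 * 365)
    print(f"{years_between_errors:,.0f} years")  # ~11,416 years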

That’s the rate of a logic error somewhere on the chip: once every 11,000 years of operation. But that error might occur anywhere, perhaps somewhere that wasn’t doing anything important at the time, or perhaps somewhere that was reading or writing data. That might cause data corruption right there, but let’s just focus on deduplication for now.

Let’s assume that there’s only one bit we can flip that will mess up the comparison: the bit indicating the result of the comparison operation. In reality, there are lots of points along the logic chain that are vulnerable, but I’ll restrict the scope to this one. Today’s processors have about 50 million transistors, so we can throw in a factor of 50 million, bringing that MTBF to around 550 billion years of operation.

Now, one more item to factor in: how often we actually care about the result of that one latch. Here I’ll again be very generous and say that the critical period is only 1 nanosecond, which is one billionth of a second, or a single clock cycle on a 1 GHz processor. Only during this period can a soft error cause the comparison to erroneously succeed. I’ll also assume we only encounter this critical period once per 4096-byte WAFL block. Assume the system is comparing blocks at the leisurely rate of one per millisecond, or 4 MB/s. (This is a low assumption; to keep up with deduplication via comparison the system will have to run much, much faster.) With a 1 ns critical period per comparison and 1000 comparisons per second, we have 1 microsecond of vulnerable time per second. Based on this, we can add in another factor of 1 million to the MTBF, reaching 550 million billion years of operation.
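
Putting the pieces together, here’s the whole back-of-envelope calculation as a short Python sketch; the factors are the assumptions laid out above, not measured values.

    # Combine the (deliberately generous) factors from the argument above.
    fit = 10                                   # assumed soft-error rate, failures per 1e9 hours
    base_mtbf_years = 1e9 / fit / (24 * 365)   # ~11,400 years between soft errors
    transistor_factor = 50e6                   # only 1 of ~50 million transistors matters
    duty_cycle_factor = 1e6                    # vulnerable 1 us out of every second
                                               # (1 ns window x 1000 compares per second)
    false_match_mtbf = base_mtbf_years * transistor_factor * duty_cycle_factor
    print(f"{false_match_mtbf:.1e} years")     # ~5.7e17: the "550 million billion" years above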

From this incredibly conservative analysis, it will take 550 million billion years of operation for the bit-by-bit compare to signal a false match, or about once every 2^59 years. By comparison, last week I showed that SHA-512 would take 2^221 years to encounter a hash collision.

Thus, with simple statistics, I’ve just proven that the NetApp bit-by-bit compare is more likely to corrupt data, by a factor of around 2^162, than a hash-based system is to ever encounter a hash collision. But even that is an extraordinarily unlikely event.
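
If you want to check the exponents for yourself, the conversion to powers of two looks like this:

    import math

    false_match_years = 550e15                # ~550 million billion years, from above
    print(math.log2(false_match_years))       # ~58.9, i.e. about 2^59
    sha512_collision_years = 2**221           # figure from the previous post
    print(math.log2(sha512_collision_years) - math.log2(false_match_years))  # ~162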

Everything about modern computing is statistics. The things we worry about, we design to be so unlikely that they will likely never occur. Dwelling on the likelihood of a hash collision, or even of a false comparison, is a waste of time, because it’s missing the forest for the trees… the things that are far more likely to be catastrophic to your business. Statistically, four drives in the same set in your storage system all crashing on the same day is far, far more likely than either of the cases I talk about above, at around once every 2^48 years.

You should feel completely confident using a deduplicating storage system built around a SHA-2 family hash function. Statistical hash collisions are fundamentally a non-issue. Anyone who says otherwise is just bad at math.
