… of the blog, not of your data (in this case). This blog is moving over to our home server at http://blog.permabit.com/, and Tom Cook, our CEO, and Mike Ivanov, our VP of Marketing, will be joining me in making regular posts.
Please go on over there and update your RSS feeds. You can get a feed of all our posts, or individual feeds by author.
Luckily, migrating data in WordPress is almost as easy as the automatic data migration in Permabit Enterprise Archive, so all the previous content is there for your perusal.
To kick off the new year, we recently released our Storage Predictions for 2009. We’ve received a lot of interest in this list since we released it, and I personally have been asked about prediction number 3, “RAID will Hit a Data Dead End”. Allow me to explain. (more…)
At this point there’s been lots of press coverage on the very high failure rates of Seagate’s Barracuda 7200.11 desktop drives. Last Friday, Seagate came clean and admitted this is due to a firmware bug, and that the bug affects several other drive families as well. The good news is that the problem doesn’t affect the integrity of the data stored on the drive, but the bad news is that if the bug has already hit, you’ll have to send your drive to Seagate for repair. Additionally, the updated firmware is not yet available (update: or, at least, updated firmware that doesn’t make the problem worse). This problem brings up an interesting question, though… where does data protection come into play in situations like these? (more…)
As has been covered extensively over the past two months, President-Elect Obama has announced plans to appoint a National CTO. As a CTO, I was asked recently what my comments and suggestions would be for Obama and this new federal CTO.
I thought about this for a while, and I think there are a number of things that are critical to the success of a National CTO initiative. First and foremost, I think it’s absolutely necessary that the National CTO candidate have a strong technology background. This sounds like an obvious standard but, as anyone who has held the position knows, the role of a CTO frequently trends more towards marketing than technology. While that emphasis has its place in business, for the federal government the CTO must be much more than just a persuasive technology evangelist; he or she must have a deep understanding of the technologies being considered and be willing to recommend tools that are best for the problems being solved, not just the ones that sound the best. (more…)
The Consumer Electronics Show is on this week and “green” is big news yet again, with 22 percent of consumers willing to pay more for the label, but even more skeptical of what that label really means. They’re right to be concerned. (more…)
Happy new year, everyone! It’s now 2009, which means I’ll be writing the wrong date on my checks for another few months at least. We’re celebrating 2009 with a new addition to our family:
Gir, the storage bullmastiff
Over at StorageMojo, Robin comments on the challenges of shared memory controllers with multi-core processors. This is actually something that’s been a big problem for regular software development for a while now, and is especially important in the storage space. (more…)
In the first post of this series, I introduced the concepts of physical versus logical readability and explained how getting back your bits in 100 years is a hard problem, but one with solid product and technology solutions. Last post, I explained why there’s no simple solution to being able to turn those bits back into information, but there are ways, through careful planning, to avoid the pitfalls.
So how can you solve the logical readability problem? Primarily by following best practices for data format preservation. Some best practices: (more…)
A few things I’ve spotted around the web: Tony Pearson was at the Gartner Data Center Conference last week in Las Vegas; we were there too. I didn’t get to go myself, but the reports I have back are that it was an absolutely fantastic show, full of people who were fanatical about saving money on their storage, not just concerned with where the next steak dinner is at the show.
The best quote that Tony provides is from a lunch talk: (more…)
In my last post in this series I introduced the concepts of physical versus logical readability and explained how getting back your bits in 100 years is a hard problem in itself but is not alone sufficient for a complete archive. Accurately being able to store and retrieve bits — maintaining physical readability — over a long period of time is critical to an archive, as is being able to do so cost effectively, but is not enough. Logical readability, the ability to interpret what those bits mean, must be maintained as well, and this is a much harder problem that cannot be solved by technological means alone.
Modern electronic storage consists of binary data, ones and zeros. The physical encodings are complex and analog in nature and change frequently with advances in technology, but the data represented is always binary. This has not always been the case, as in the analog tapes from the Lunar Orbiter that I wrote about last time, but for fundamental mathematical reasons data is almost certain to be binary representable going forward. Storing and retrieving a bitstream is the physical readability challenge. (more…)
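To make the physical readability challenge concrete, here’s a minimal sketch of the kind of check an archive can perform: record a fingerprint of the bitstream when it’s written, and prove on retrieval that the exact same bits came back. The names here (`fingerprint`, the sample bitstream) are illustrative only, not from any particular product.

```python
import hashlib

def fingerprint(bits: bytes) -> str:
    """Content fingerprint of a stored bitstream."""
    return hashlib.sha256(bits).hexdigest()

# Any bitstream at all; what format it's in doesn't matter at this layer.
original = b"\x89PNG\r\n\x1a\n sample payload"
stored_digest = fingerprint(original)

# Decades later: the archive is physically readable only if retrieval
# returns the exact bits that were written, intact and complete.
retrieved = original  # stand-in for reading back from media
assert fingerprint(retrieved) == stored_digest
```

Note that this check says nothing about what the bits mean; it only proves they survived unchanged, which is exactly why physical readability is the part of the problem technology can fully solve.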
After my post about dirty little secrets a few weeks ago, Joe Martins from Data Mobility Group wrote to point out the real “dirty little secret” about archive systems: even if your archival storage is reliable, it doesn’t mean you can do anything useful with your data once you retrieve it in the distant future.
There’s more to a digital archive than just being able to store and retrieve your bits from media. If your storage system has been designed properly then it will give you your data, but it won’t necessarily give you the information that data represents. For several years I co-chaired the SNIA Long-Term Archive and Compliance Storage Initiative, and this was a problem that we frequently considered. The challenges found when considering how to solve this problem led in part to the development of XAM, the new eXtensible Access Method standard for object-based information storage.
When it comes time to retrieve and process data that was written long ago, there are two major challenges — what I like to call physical readability and logical readability. Physical readability means that the archive system is able to retrieve and present the exact bitstream that was originally written, intact, complete, with no errors. Logical readability, on the other hand, means that I am able to extract the same semantic meaning from those bits as when they were originally processed. The first problem is one that can be solved purely by technology; the second, sadly, cannot. (more…)
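The split between the two kinds of readability can be illustrated with a short, hedged sketch (mine, not from the post): a machine can prove the bitstream is unchanged, but the same intact bits mean entirely different things depending on how you interpret them, and nothing in the bits themselves says which interpretation was intended.

```python
import hashlib
import struct

bits = b"\x48\x49\x00\x01"  # the exact bitstream, preserved perfectly

# Physical readability: technology can verify the bits are unchanged.
digest = hashlib.sha256(bits).hexdigest()
assert hashlib.sha256(bits).hexdigest() == digest

# Logical readability: the bits alone don't say which reading is "the data".
as_text   = bits.decode("latin-1")        # four characters of text
as_big    = struct.unpack(">I", bits)[0]  # one big-endian 32-bit integer
as_little = struct.unpack("<I", bits)[0]  # a different little-endian integer
```

Every interpretation above is consistent with the same, bit-perfect retrieval; recovering the *right* one requires knowing the format, which is why logical readability is a planning problem rather than a purely technological one.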