During my conversations with people far smarter than me at SNW Fall in Dallas this week, the subject of the future of backup arose. The questions were: what exactly do you mean by the next generation of data protection and what do you mean when you say backup is broken?
Let me address the backup question first. If truth be told, we are performing this mission-critical task in much the same way we did when Cheyenne invented client/server backup. Our compute world has changed, but our approach to data protection has not kept pace. What I mean is that backup is broken due to three significant industry drivers:
1. Explosive data storage growth. The backup window is an endangered species, if not already extinct. Operations demand 24/7 availability and cannot tolerate outages due to backup traffic. This demand for always-available services coupled with the unprecedented growth in data has created a major challenge for all data protection professionals.
2. Virtualization. Virtualization breaks backup for two main reasons. First, we are consolidating a lot of compute and storage power into a smaller number of servers, which stresses the available I/O. Second, virtualization introduces a new data format that challenges legacy backup solutions and at the same time presents a whole new world of data recovery opportunities.
3. Compressed operating budgets. It has been said that for every $1 spent on technology acquisition, $8 are spent to manage it. There is constant pressure on IT to reduce this ratio.
Data protection can be interpreted in so many ways today that there is no consensus on where the technology should go, how it should evolve, or where it should apply. How can we adapt to new technologies such as virtualization, or adopt others such as cloud services?
To their credit, the guys at Wikibon started a community project around this same subject, “The Future of Data Protection - How Cloud Computing and Virtualization Change Everything,” and I encourage the participation of all platform, storage, and data protection vendors, as well as the end users who consume the technology. This is my first blog post on the topic and my attempt to contribute to the discussion.
I don’t think I need to repeat the fact that traditional backup is broken, but I just did! There are three main reasons that lead to this statement: exponential data growth, server virtualization, and unacceptable recovery times. In this post, I’ll expand only on these three reasons, and I will address available technologies and the future of data protection in future blogs.
As regular readers of our blog can attest, we are serious about data protection. This seriousness stems from an understanding that when clients seek data protection and data recovery solutions, it is not just data they are looking to protect. They are making a move to protect their reputations, too. Access to data is mission critical to any business, and any error of judgment in this area can cost the company, its clients, and its investors money. Therefore, every necessary step must be taken to avoid these mishaps and safeguard that reputation.
Last year was a significant one for virtualized servers in the government-agency data center. Many such organizations took advantage of the opportunity to make better use of existing budgets by consolidating physical infrastructure and commoditizing server hardware.
What many of those agencies have yet to act on, though, is the other half of the story.
In 2010, these organizations will move to address the data aspects of information technology and the value of storage virtualization. There are real economic savings to be had here for any IT team that recognizes its job did not end with server virtualization. I actually wrote an article in Government Computer News that appeared this month on this very subject: "Why 2010 is the year of the virtual data center."
Storage virtualization is accomplished by creating a virtual abstraction layer between physical or virtual servers and the existing physical storage. Once the storage is virtualized and the physical location of data becomes abstracted from the hosts, some interesting opportunities for savings become apparent.
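The abstraction layer described above can be illustrated with a toy sketch. This is not any vendor's actual implementation; the class and method names are purely illustrative. The point is that hosts address only virtual blocks, while a mapping table decides which physical device holds the data, so blocks can be migrated between arrays without the host noticing:

```python
class Disk:
    """A stand-in for a physical array; stores data by physical block number."""
    def __init__(self):
        self.blocks = {}

    def read(self, pblock):
        return self.blocks[pblock]

    def write(self, pblock, data):
        self.blocks[pblock] = data


class VirtualVolume:
    """The abstraction layer: maps virtual blocks to (device, physical block)."""
    def __init__(self):
        self.mapping = {}

    def provision(self, vblock, device, pblock):
        self.mapping[vblock] = (device, pblock)

    def read(self, vblock):
        device, pblock = self.mapping[vblock]
        return device.read(pblock)

    def migrate(self, vblock, new_device, new_pblock):
        # Copy the data and repoint the map; the host's virtual address
        # never changes, so migration is transparent to the server.
        old_device, old_pblock = self.mapping[vblock]
        new_device.write(new_pblock, old_device.read(old_pblock))
        self.mapping[vblock] = (new_device, new_pblock)
```

Because the host sees only the virtual address, an administrator can drain an aging array onto a new one mid-day, with no host-side reconfiguration; that transparency is where the mobility and utilization savings come from.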
New Orleans is a city of enticing cuisine, fantastic music, and unbridled nightlife – and for the next week, it is home to the Microsoft TechEd show. I’m excited to be in the Crescent City this week, where we are talking to colleagues in IT about data protection in Microsoft Windows environments – and how FalconStor® CDP can help transform traditional data backup from a patch job into a continuous process that eliminates the backup window and allows for instant recovery.
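The continuous-process idea behind CDP can be sketched in a few lines. This is a minimal toy model, not FalconStor's API: every write is appended to a time-stamped journal instead of waiting for a scheduled backup pass, so the volume can be reconstructed as of any earlier moment:

```python
class CdpJournal:
    """Toy continuous data protection journal: log every write with a
    timestamp, then replay the log to recover any point in time."""

    def __init__(self):
        # (timestamp, block, data) tuples, appended in time order
        self.entries = []

    def record_write(self, timestamp, block, data):
        self.entries.append((timestamp, block, data))

    def recover(self, as_of):
        """Rebuild the volume image as it looked at time `as_of`."""
        image = {}
        for ts, block, data in self.entries:
            if ts > as_of:
                break
            image[block] = data  # later writes overwrite earlier ones
        return image
```

Because protection happens at write time rather than in a nightly pass, there is no backup window to schedule, and recovery is a replay rather than a restore from the last good tape.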
Recently, in pursuit of OMB's federal data center consolidation initiative, many government agencies have focused on realizing the benefits of virtualization to better leverage existing budgets by consolidating physical infrastructure and commoditizing their server hardware through the use of virtual servers. Software and hardware vendors have been very focused on selling the benefits of server virtualization. What many agencies don't realize is that they are missing the other half of the story: storage virtualization, a huge opportunity to increase overall efficiency and generate substantial savings while optimizing asset utilization and improving data mobility.
Although "The times they are a-changin'" is a lyric from an old Bob Dylan song, it sure seems prophetic as it pertains to the storage industry today. In just a few short years, we have seen the fall of Fibre Channel and the introduction of SAS storage as the mainstay of SAN storage.
Tape backup is still the most prevalent data protection technology, and array-based replication is still the most common method of protecting mission-critical applications, but companies are warming to the idea of saving money by innovating in how they attack the largest ongoing IT cost: backup and disaster recovery.