FalconStor Community


Where is data protection going? Why is it changing? Chapter One: The Broken Backup



Data protection can be interpreted in so many ways today that there is little consensus on where the technology should go or how it should evolve. Where should it apply? And how can we adapt to new technologies such as server virtualization, or adopt others such as cloud services?

To their credit, the folks at Wikibon have started a community project on this very subject, “The Future of Data Protection - How Cloud Computing and Virtualization Change Everything,” and I encourage all platform, storage, and data protection vendors, as well as the end users who consume the technology, to participate. This is my first post on the topic and my attempt to contribute to the discussion.

I don’t think I need to repeat that traditional backup is broken, but I just did! Three main reasons lead to this statement: exponential data growth, server virtualization, and unacceptable recovery times. In this post I’ll expand only on these three reasons; I’ll address the available technologies and the future of data protection in future posts.


Let’s start with data growth and why it is a constant in the changing data protection equation. Traditional backup processes are repetitive batch jobs: we run weekly full backups as the base and then save incremental changes during the week, so that we have enough recovery points to go back to in case of data loss or corruption. This process is extremely expensive in server and network resources, making it impossible to run during business hours; that leads us to the “backup window,” the time available outside business hours to perform backups. According to IDC’s Digital Universe study, published by EMC, annual data growth now exceeds 50 percent, driven by more digital content being produced than ever before. This may not be the case for every organization, but data production is far outpacing the speed of the technology used to back it up. How fast can we improve backup speed to keep up with data growth? We can obviously keep buying more backup licenses, more servers, and more network connectivity, but that is just throwing more money at the same broken process.
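To make that batch process concrete, here is a minimal sketch of the incremental half of a weekly-full-plus-incremental scheme. All names here are hypothetical, and real backup software (or a tool like rsync) does this with far more bookkeeping; the point is the core loop: walk the file tree and copy everything modified since the last run, which is exactly the I/O that eats server and network resources inside the backup window.

```python
import os
import shutil

def incremental_backup(source_dir, backup_dir, last_backup_time):
    """Copy only files modified since the last backup run (a Unix timestamp)."""
    copied = []
    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            src = os.path.join(root, name)
            if os.path.getmtime(src) > last_backup_time:
                rel = os.path.relpath(src, source_dir)
                dst = os.path.join(backup_dir, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(src, dst)  # full read + write of each changed file: this is the expensive part
                copied.append(rel)
    return copied
```

Even though only changed files are copied, every file's metadata is still scanned, and the changed data still has to move across the network, which is why the job is pushed outside business hours.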

Another factor putting pressure on the backup window is the way we do business today. Fifteen years ago, I used to go to the bank a few times a month to deposit a check, pay bills, withdraw cash, and so on. Today I don’t even know who my account manager is! They’d be lucky to see me once a year. I can do everything I need over the internet, at any time of day or night, from anywhere in the world. The nature of business has changed: there is more automation than ever, and more services are available online 24/7. There are no off-hours anymore. When I recently moved to New York, I was happy to learn that I could file my state taxes online. But when I tried to file at 1:00 a.m., the service was not available; in fact, it was unavailable every day between 1:00 and 5:00 a.m. As a backup guy, I know that they are backing up their systems and that this is their backup window, and it is causing a serious service interruption. Perhaps the NY tax office can afford that; other businesses may not!

The second reason backup is broken is the increased adoption of server virtualization. Traditionally, backups relied on the CPU and network resources available to the server infrastructure to process daily and weekly jobs. Resource utilization in those physical environments hovered around 20 percent, leaving plenty for the backup process. With server virtualization, server density has increased and resource utilization is reaching 70 percent; that leaves far fewer resources for the backup process and much more data to back up per physical host. The proliferation of server virtualization has given birth to new companies racing to solve the data protection challenges of virtual environments.
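The squeeze is worth putting into numbers. The sketch below is a hypothetical back-of-the-envelope model, using only the rough figures from this post (20 percent utilization on physical servers, 70 percent on virtualized hosts), not measurements: per-host backup pressure grows with the data consolidated onto the host and with the loss of spare capacity to run the backup job.

```python
def backup_pressure(vms_per_host, utilization, baseline_utilization=0.20):
    """Relative per-host backup pressure versus one physical server.

    Pressure scales with the number of workloads consolidated on the
    host (roughly proportional to the data to protect) and inversely
    with the headroom left over to run the backup job itself.
    """
    baseline_headroom = 1.0 - baseline_utilization
    headroom = 1.0 - utilization
    return vms_per_host * (baseline_headroom / headroom)

# Ten VMs on one host at 70% utilization vs. one physical server at
# 20%: 10 * (0.8 / 0.3), roughly 27x the backup pressure per host.
print(round(backup_pressure(10, 0.70), 1))
```

Even with conservative consolidation ratios, the per-host load grows by an order of magnitude while the resources left to service it shrink, which is why physical-era backup agents struggle in virtual environments.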

Third, and most important of all, is recovery: the reason we implement data protection solutions in the first place. Recovery requirements have changed over the past decade, driven by our dependence on electronic transactions for almost all business processes and by the risk associated with service or data loss. Organizations have less tolerance for downtime and its cost; downtime today hurts the business far more than it did 10 years ago. A significant improvement in recovery processes is therefore necessary to match the new data and service availability requirements: the old, lengthy restore process must be transformed into an instant recovery or failover process to maintain business operability.

To be continued

Fadi Albatal
