A New Atomic Unit for Backup


During my conversations with people far smarter than I at SNW Fall in Dallas this week, the subject of the future of backup came up. Two questions kept recurring: What exactly do you mean by "the next generation of data protection"? And what do you mean when you say "backup is broken"?

Let me address the backup question first. Truth be told, we are still performing this mission-critical task in much the same way we did when Cheyenne invented client/server backup. Our compute world has changed, but our approach to data protection has not kept pace. Backup is broken due to three significant industry drivers:

1. Explosive data storage growth. The backup window is an endangered species, if not already extinct. Operations demand 24/7 availability and cannot tolerate outages due to backup traffic. This demand for always-available services coupled with the unprecedented growth in data has created a major challenge for all data protection professionals. 

2. Virtualization. Virtualization breaks backup for two main reasons. First, we are consolidating a lot of compute and storage power into a smaller number of servers, which stresses the available I/O. Second, virtualization introduces a new data format that challenges legacy backup solutions and at the same time presents a whole new world of data recovery opportunities.

3. Compressed operating budgets. It has been said that for every $1 spent on technology acquisition, $8 are spent to manage it. There is constant pressure on IT to reduce this ratio.

And what do I mean by next generation data protection?
Simply put, next generation data protection (NGDP) deals with the three issues presented above; more importantly, however, it changes the focus of data protection to more accurately reflect the IT operating model. Historically, the atomic unit for backup has been the block or the file. We as an industry have been teaching our customers to think about backup with these components in mind, when in reality they are thinking about something much more important: the service. Our customers do not have file- or block-level agreements, FLAs and BLAs; they have service-level agreements, SLAs. Why do we persist in thinking about backup at the atomic level when it is the entire system that matters?
There is also the issue of confusion around backup vs. disaster recovery vs. high availability. If a company is paying the premium to mirror servers to ensure always-up performance, why don’t we, the vendors, build in backup and disaster recovery? After all, we are already moving the bits from point A to point B – why not also send them to point C, the cloud, and to point D, the tape drive?

Introducing service-oriented data protection
Service-oriented data protection (SODP) is the next logical step in the evolution of data protection, and it requires a basic shift in thinking and technology. The first step, of course, is to start thinking about the service and what it contains. A basic web portal, for instance, is not simply LUN 32; it consists of many things, such as an Apache server, a SQL Server database, a content management application, and so on. The web portal service needs to be managed as one atomic unit. It does not help anyone if only 80 percent of the service is protected; it must all be protected. In order to do this, our NGDP software must be application aware: it must know how to copy data from a database or email application in a system- or state-coherent way. In other words, clear snapshots as opposed to blurry ones. When the time stamps for all key components are the same and the data is properly collected, the entire service can be moved or recovered without error or corruption.
Helicopters have been described as nothing more than 50,000 parts flying in close formation, and it is the mechanic's responsibility to keep that formation as tight as possible. In many ways, this is an analog to an IT service, and it is our job to keep the entire service and all of its piece parts flying in tight formation.
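The coordinated, application-aware snapshot described above can be sketched in a few lines. This is a minimal illustration, not FalconStor's actual implementation: the component names and the quiesce/resume interface are assumptions for the example. The key idea is that every component of the service is quiesced first, the whole set is stamped with a single timestamp, and only then does everything resume; that shared timestamp is what makes the snapshot "clear" rather than "blurry."

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical component interface: each component knows how to
# quiesce (flush and pause writes) and resume itself.
@dataclass
class Component:
    name: str
    def quiesce(self):
        print(f"quiescing {self.name}")
    def resume(self):
        print(f"resuming {self.name}")

@dataclass
class Service:
    """The service is the atomic unit: all components snapshot together."""
    name: str
    components: list = field(default_factory=list)

def snapshot_service(service):
    """Quiesce every component, stamp one timestamp for the whole
    service, then resume everything (even if quiescing fails midway)."""
    quiesced = []
    try:
        for c in service.components:
            c.quiesce()
            quiesced.append(c)
        # One timestamp for the entire service -- a "clear" snapshot.
        stamp = datetime.now(timezone.utc).isoformat()
        return {c.name: stamp for c in service.components}
    finally:
        for c in reversed(quiesced):
            c.resume()

portal = Service("web-portal", [Component("apache"),
                                Component("sqlserver"),
                                Component("cms")])
snap = snapshot_service(portal)
```

If any single component fails to quiesce, the `finally` block resumes whatever was already paused, so the service never ends up partially frozen; an 80-percent-protected snapshot is treated as no snapshot at all.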
Service orientation coupled with continuous snapshot capabilities addresses our first challenge of growing storage and limited backup windows. A continuous data protection system greatly reduces the amount of data moved; but an even smarter approach moves only the data that is new, thus incorporating a no-duplication approach to data protection. Now we can set a recovery point objective (RPO) for the entire service and not just one element. 
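The "move only the data that is new" idea above is essentially content-addressed deduplication. Here is a simplified sketch (fixed-size chunks and SHA-256 hashing are illustrative choices; production systems typically use variable-size chunking): a chunk is transferred only if its hash has never been seen before, so repeated backups of mostly-unchanged data move very little.

```python
import hashlib

class DedupStore:
    """Only chunks never seen before are transferred and stored."""
    def __init__(self):
        self.chunks = {}  # hash -> chunk data

    def backup(self, data, size=4096):
        """Back up a byte stream; return how many bytes actually moved."""
        moved = 0
        for i in range(0, len(data), size):
            chunk = data[i:i + size]
            h = hashlib.sha256(chunk).hexdigest()
            if h not in self.chunks:   # new data: transfer and keep it
                self.chunks[h] = chunk
                moved += len(chunk)
        return moved

store = DedupStore()
first = store.backup(b"A" * 8192 + b"B" * 4096)   # duplicate A chunk skipped
second = store.backup(b"A" * 8192 + b"C" * 4096)  # only the new C chunk moves
```

In the second backup only 4 KB moves even though 12 KB of source data exists, which is what lets an RPO be set for the whole service without re-moving everything at each recovery point.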
Now what does this mean for our new virtual world? Well, most virtual machines are deployed to deliver a service. When we protect and manage each service as a single entity, we achieve a high level of visibility, control, and agility. We can move entire complex services from one x86 platform to another and deliver on the promise of an agile enterprise. A service in distress can be replicated and published on another system providing seamless failover and failback without the need for identical backup hardware. 

Service-oriented policies
Once we are dealing at the service level, we can begin to implement data backup, retention, and archival rules on a service-by-service basis. Were you ever asked if your DR solution was commensurate with your SLA? Now you can not only answer the question but also deliver the goods.
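A per-service policy might look something like the following sketch. The field names, services, and numbers are purely illustrative, not a real FalconStor schema; the point is that once the policy is attached to the service rather than to a LUN or a file set, "is your DR commensurate with your SLA?" becomes an answerable check.

```python
from dataclasses import dataclass

# Hypothetical per-service policy; fields are illustrative only.
@dataclass
class ServicePolicy:
    service: str
    rpo_minutes: int        # maximum acceptable data loss
    retention_days: int     # how long recovery points are kept
    archive_to_tape: bool   # "point D" in the A -> B -> C -> D chain

policies = [
    ServicePolicy("web-portal", rpo_minutes=15, retention_days=30,
                  archive_to_tape=True),
    ServicePolicy("payroll", rpo_minutes=5, retention_days=2555,
                  archive_to_tape=True),
]

def meets_sla(policy, sla_rpo_minutes):
    """True if the service's recovery point objective satisfies the SLA."""
    return policy.rpo_minutes <= sla_rpo_minutes
```

Each service carries its own RPO and retention, so the payroll service can keep seven years of recovery points while the web portal keeps thirty days, without one policy being contorted to fit both.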
In summary, a little perspective can go a long way. By thinking about the solution in the same way that our customers think about their service-delivery challenges, we are one step closer to delivering an operational model that fits in with the bigger picture.

Are we there yet?
Not quite. We at FalconStor have most of the really hard stuff done. Snapshot and replication that is both application aware and recovery centric is a large hurdle and a good start. We have this functionality today. What we are building is the management layer that understands how everything fits together: the covalent bond between the block, database, and server, if you will. This piece also needs to be metadata savvy. You can't begin to think about managing petabytes of data without using metadata services.
This is where we are going. We plan to have the next installment of this vision in the first half of 2011. Isn’t it time we moved to the next generation?
