Tape backup is still the most prevalent technology for data protection, and array-based replication remains the most common method of protecting mission-critical applications. But companies are warming to the idea of saving money by innovating in how they attack one of the largest ongoing IT costs: backup and disaster recovery.
In light of the events at the recent VMworld conference, this post should be timely. In my travels lately I have been bumping into a lot of organizations suffering the pain and turmoil of moving from a physical environment at their production datacenter to a virtual environment for DR. The typical process is to build out a SAN and some more powerful servers at the remote site, make copies of the production servers, bring the copies to the DR site, and then use VMware Converter to create a new VM for each machine. Once a VM is created at the DR site, they typically rely on a convoluted process to continually inject changes from the physical servers at the production site into the virtual servers at the DR site.
I wish you could have been with me to see the face of one datacenter manager as I showed him how he could:

1) Eliminate the SAN at the DR site
2) Continually replicate delta versions of production changes, including C: drive changes, to the DR site using 80% less WAN bandwidth
3) Leverage VMware RDM and 10 GbE at the DR site to assure application responsiveness when using virtual servers
4) Bring up his SQL Server in 10 minutes at the DR site from the replicated production data (we actually timed this)

The process is fairly simple, and all the tools to create the VM are within VMware. All you need is an efficient mechanism to holistically accelerate protection and replication of data at the production site. I showed this with FalconStor DiskSafe running on the SQL Server to efficiently replicate delta changes on the C: drive, giving me an up-to-the-second image of the root volume at the DR site. The data drives were protected by a CDP appliance, which MicroScanned the data to send only the 512-byte sectors that had changed to another CDP appliance at the DR site.
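The core idea behind that sector-level replication is simple: compare the volume at 512-byte granularity and ship only the sectors that differ. Here is a minimal Python sketch of that comparison; it is a hypothetical illustration of the technique, not FalconStor's actual MicroScan implementation:

```python
SECTOR_SIZE = 512  # granularity described above


def changed_sectors(old_image: bytes, new_image: bytes):
    """Yield (sector_index, data) for each 512-byte sector that differs.

    Only changed sectors cross the WAN, rather than whole files or
    volumes -- the source of the bandwidth savings described above.
    """
    assert len(old_image) == len(new_image)
    for offset in range(0, len(old_image), SECTOR_SIZE):
        old = old_image[offset:offset + SECTOR_SIZE]
        new = new_image[offset:offset + SECTOR_SIZE]
        if old != new:
            yield offset // SECTOR_SIZE, new


# Demo: a 4-sector "volume" where only sector 2 was modified
old = bytes(4 * SECTOR_SIZE)
new = bytearray(old)
new[2 * SECTOR_SIZE:3 * SECTOR_SIZE] = b"x" * SECTOR_SIZE

deltas = list(changed_sectors(old, bytes(new)))
# Only one 512-byte sector needs to be replicated, not the full 2 KB
```

A real CDP appliance would track changed sectors in-line rather than rescanning the whole volume, but the payload it sends is the same: just the modified sectors.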
The CDP appliance at the DR site was then used to provision both the replicated C: drive and the data drives from a consistent snapshot over 10 GbE. Simply mount the replicated C: drive to the DR server, inject the SCSI drivers using the VMware toolkit, and bring up your new SQL VM. Once you are set up, you can shut down the VM, and any changes on the physical side will continue to replicate to the virtual side automatically.
Data deduplication is one of the most obvious choices for reducing overall infrastructure costs within the data center; it also reduces power, cooling, and floor-space requirements for IT. Data deduplication at the file level (unstructured data) can be used to eliminate duplicate copies within production storage.
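File-level deduplication boils down to detecting identical contents by hash and keeping a single copy. The following is a minimal Python sketch of that detection step, under the assumption of a simple content-hash approach (the function name and demo files are illustrative, not any product's API):

```python
import hashlib
import os
import tempfile


def dedupe_groups(paths):
    """Group file paths by SHA-256 content digest.

    Files sharing a digest have identical contents, so only one copy
    needs to be kept in production storage.
    """
    groups = {}
    for path in paths:
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        groups.setdefault(digest, []).append(path)
    return groups


# Demo: three small files, two of which are byte-for-byte identical
tmpdir = tempfile.mkdtemp()
files = []
for name, data in [("a.txt", b"hello"), ("b.txt", b"hello"), ("c.txt", b"world")]:
    path = os.path.join(tmpdir, name)
    with open(path, "wb") as f:
        f.write(data)
    files.append(path)

dupes = {d: ps for d, ps in dedupe_groups(files).items() if len(ps) > 1}
# One duplicate group is found: a.txt and b.txt share a digest
```

Production deduplication engines work at sub-file or block granularity and store hashes in an index rather than rescanning, but the principle is the same.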
In order to achieve greater efficiencies from IT investments, most IT organizations leverage the Information Technology Infrastructure Library (ITIL) standards model when implementing solutions within the organization.
With the economy being what it is, and all the consolidation going on among very large US businesses, the ability to provide complete data mobility is becoming a very interesting topic for many a CIO these days.