Using Virtualization and the Optimized Data Services model to reduce costs and speed recovery


To achieve greater efficiency from IT investments, most IT organizations leverage the Information Technology Infrastructure Library (ITIL) standards model when implementing solutions within the organization.

Although standardization provides substantial operational benefits, reliance on traditional methodologies for critical data services is not meeting the more stringent financial and budgetary objectives we face in a down market. The Optimized Data Services model is a shift away from traditional approaches: it embraces recent advancements in server and storage virtualization, including data deduplication, continuous data protection, WAN optimization, and thin provisioning, to enable a paradigm shift in the datacenter toward a more cost-effective and optimized approach to IT.


What are data services? Data services are the four things an IT department does to enable the applications that run the business:

  • Storage provisioning
  • Data protection
  • Data replication
  • Application recovery

Existing models in IT are broken down into the following functional layers:

  • Compute Layer: the IP network for client connectivity, the server infrastructure, operating systems, the applications themselves, and any host-based software, including path failover drivers and clustering software.
  • Storage Network Layer: all the storage interconnect and communication equipment, including switches, the cable plant, storage wide area network equipment, and switch-based functionality for security and zoning.
  • Storage Layer: all the underlying physical storage devices, including disk, tape, and storage arrays, plus the firmware within the storage devices for RAID and high availability.

Since existing models are typically implemented using a standards approach to reduce complexity, there is a heavy reliance on technology from only a few specific vendors at each of these layers. To introduce further cost reduction, business flexibility, and operational efficiency into the existing environment, and to bring more optimization into your storage infrastructure, it is critical to take advantage of recent advances in technical innovation. The ODS model enhances existing models by combining the best of traditional ITIL-based standards with that innovation to achieve a utility approach: Optimized Data Services.

An Optimized Data Services (ODS) utility is achieved through virtualization and physical abstraction, which optimize data movement between compute and storage elements. Once virtualized, the ODS platform enables the creation of policies that enforce specific service levels for explicit or pooled datasets; a sketch of such a policy appears after the checklist below. The grouping of data elements for consistency or recovery purposes is no longer hampered by physical constraints (e.g., LUNs in the same array, or SAN vs. non-SAN or storage-network-attached devices or hosts). The overarching goal of the ODS model is to make the ability to provision, protect, migrate, deduplicate, encrypt, replicate, recover, and archive any data source to any application in real time via policy more cost-effective and operationally efficient (i.e., simple).

The questions IT managers need to ask themselves to ensure that the solutions being chosen and implemented conform to the ODS model are:

  • Does the solution simplify operations?
  • Can we use the same solution across all platforms and applications?
  • Does the solution leverage existing assets?
  • Can we leverage current policies and procedures?
  • Can we implement it based on the savings it provides rather than relying on new budget?
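To make "service levels enforced via policy" concrete, here is a minimal sketch in Python. Every name in it (the classes, fields, and the "gold" tier) is a hypothetical illustration invented for this post, not FalconStor's API; the point is only that a policy is declared once and applied to a pool of datasets regardless of physical placement.

from dataclasses import dataclass, field

# Hypothetical sketch of an ODS-style policy: service levels are declared
# once, then applied to a pool of datasets independent of which physical
# array or host holds the data (the virtualization layer hides that).

@dataclass
class ServiceLevelPolicy:
    name: str
    rpo_seconds: int         # recovery point objective (0 = continuous/CDP)
    rto_minutes: int         # recovery time objective
    replicate_over_ip: bool  # maintain an off-frame DR copy over IP
    deduplicate: bool        # reduce WAN traffic before transfer
    encrypt: bool

@dataclass
class DatasetPool:
    name: str
    datasets: list = field(default_factory=list)  # e.g. virtualized LUN IDs
    policy: ServiceLevelPolicy = None

# A "gold" tier: continuous protection, 15-minute recovery, replicated DR copy.
gold = ServiceLevelPolicy("gold", rpo_seconds=0, rto_minutes=15,
                          replicate_over_ip=True, deduplicate=True, encrypt=True)

# The pool groups data for consistency/recovery purposes without regard to
# physical constraints such as which array each LUN lives in.
erp_pool = DatasetPool("erp-production",
                       datasets=["lun-0017", "lun-0042"], policy=gold)

print(f"{erp_pool.name}: RPO={erp_pool.policy.rpo_seconds}s, "
      f"RTO={erp_pool.policy.rto_minutes}min")

The design point is that the service level travels with the pool, so answering "can we use the same solution across all platforms?" becomes a matter of attaching the same policy object rather than reconfiguring each array.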

An amazing opportunity currently exists for IT to reduce costs by adding new, innovative technologies to existing storage services. The inclusion of continuous data protection and replication (CDP/CDR), data deduplication, and virtualization will enable datacenter administrators and CIOs to offer an alternative data services menu in conjunction with existing solutions. The goal is to provide a method of protection and recovery that offers enhanced service levels and leverages existing hardware to improve the overall return on existing assets.

Existing operations can continue as normal while the additional technologies are implemented. In other words, the ODS model should be implemented in phases as an augmentation rather than a replacement of existing practices, and used as an alternative offering on the menu of services provided to IT consumers.

So what are the advantages of optimizing data services? An example is how the ODS model changes the physics of backup and recovery:

  • Since CDP continually captures and stores all updates to an "off-frame" copy, and then snapshots the data in conjunction with the application, the entire backup process, and all of its associated costs, is eliminated. Backup is no longer a process of moving data from point A to point B, so the actual physics of the traditional backup process disappear, and your RPO (recovery point objective) becomes the last known good write.
  • Since recovery is now a simple matter of mounting a snapshot, all data movement for recovery goes away. This in turn improves the RTO (recovery time objective) to 15 minutes or less for an application of any size. (A sketch of this journal-and-snapshot idea appears below.)
  • The off-frame copy is used to continually replicate data for DR over IP, which:
    1. Eliminates the need for array-based replication licenses
    2. Enables the use of any storage at the DR site
    3. Eliminates the expense of SAN extension gear
    4. Provides an immediately mountable, consistent copy for DR

Data deduplication is a great solution for reducing the WAN bandwidth required for DR replication, but because reconstituting the datasets on the far side takes time and can hurt recovery time, primary applications are usually not deduplicated. The ODS model changes the physics of this issue by implementing sub-block-level delta versioning to eliminate white space and duplicated updates. The process is called microscanning, and it has been shown to reduce the amount of data that needs to be transferred by up to 70:1. The result is the value of data deduplication for primary applications without re-hydration of the data on the far side: all of the benefits, none of the pain. This alone can enable a business to leverage its existing bandwidth to provide DR recovery for many more applications than was previously possible. (A sketch of the sub-block delta idea follows the CDP example below.)

These are just a few of the ways the ODS model can change the physics of data protection and recovery, while at the same time enabling greater use of existing assets to reduce current costs.
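To make the CDP point concrete, here is a minimal Python sketch of journal-based continuous protection, under the assumptions described above: every write lands in an off-frame journal, application-consistent snapshot markers are dropped into the stream, and "recovery" is presenting the journal state at a marker rather than copying data back. The class and method names are hypothetical, invented for illustration; this is not FalconStor's implementation.

import time

# Sketch of CDP-style journaling (hypothetical, for illustration only).
# Every write is appended to an off-frame journal; snapshot markers taken
# in coordination with the application mark consistent recovery points.

class CDPJournal:
    def __init__(self):
        self.entries = []   # (timestamp, block_id, data) in arrival order
        self.markers = {}   # snapshot name -> index into self.entries

    def write(self, block_id, data):
        # Nothing is batched or scheduled, so RPO = the last known good write.
        self.entries.append((time.time(), block_id, data))

    def snapshot(self, name):
        # Taken in conjunction with the application (e.g. after a DB flush)
        # so the marked point is application-consistent.
        self.markers[name] = len(self.entries)

    def mount(self, name):
        # "Recovery" builds a view of the journal up to the marker -- no bulk
        # data is moved from point A to point B, which is why RTO collapses.
        cutoff = self.markers[name]
        view = {}
        for _, block_id, data in self.entries[:cutoff]:
            view[block_id] = data   # last write to each block wins
        return view

journal = CDPJournal()
journal.write("blk-1", b"hello")
journal.write("blk-2", b"world")
journal.snapshot("nightly-consistent")
journal.write("blk-1", b"hello v2")   # still captured; RPO = this write

recovered = journal.mount("nightly-consistent")
print(recovered)   # {'blk-1': b'hello', 'blk-2': b'world'}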

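And here is a toy illustration of the sub-block delta idea behind microscanning: instead of shipping whole blocks, or deduplicating and then re-hydrating them at the DR site, only the sub-block chunks that actually changed cross the WAN. The chunk size, function names, and the savings ratio this toy example prints are all invented for illustration; the 70:1 figure quoted above comes from the article, not from this code.

# Toy sketch of sub-block-level delta versioning (hypothetical illustration).
# Only the chunks that differ between the previous and current version of a
# block are transferred; the DR copy is patched in place, so there is no
# re-hydration step and it stays immediately mountable.

CHUNK = 512  # sub-block granularity in bytes (an illustrative choice)

def sub_block_delta(old: bytes, new: bytes):
    """Return [(offset, changed_bytes)] for the chunks that differ."""
    deltas = []
    for off in range(0, len(new), CHUNK):
        if new[off:off + CHUNK] != old[off:off + CHUNK]:
            deltas.append((off, new[off:off + CHUNK]))
    return deltas

def apply_delta(old: bytes, deltas):
    """DR side: patch the changed chunks into the existing copy."""
    buf = bytearray(old)
    for off, data in deltas:
        buf[off:off + len(data)] = data
    return bytes(buf)

old_block = bytes(64 * 1024)              # a 64 KiB block, previously replicated
new_block = bytearray(old_block)
new_block[1000:1010] = b"0123456789"      # a small application update
deltas = sub_block_delta(old_block, bytes(new_block))

sent = sum(len(data) for _, data in deltas)
print(f"sent {sent} bytes instead of {len(new_block)}")  # 512 vs 65536 here
assert apply_delta(old_block, deltas) == bytes(new_block)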
Chris Poelker

Chris Poelker is FalconStor's Vice President of Enterprise Solutions and the author of Storage Area Networks For Dummies.

Website: www.falconstor.com