What are Data Services? Data services are the four functions an IT department performs to enable the applications that run the business:
- Storage Provisioning
- Data Protection
- Data Replication
- Application Recovery
Existing models in IT are broken down into the following functional layers:

- Compute layer: the IP network for client connectivity, the server infrastructure, operating systems, the applications themselves, and any host-based software, including path-failover drivers and clustering software.
- Storage network layer: all of the storage interconnect and communication equipment, including switches, cable plant, storage wide-area-network equipment, and switch-based functionality for security and zoning.
- Storage layer: all of the underlying physical storage devices, including disk, tape, and storage arrays, along with the firmware within those devices for RAID and high availability.

Because existing models are typically implemented using a standards approach to reduce complexity, there is a heavy reliance on technology from only a few specific vendors at each of these layers. To drive further cost reduction, business flexibility, and operational efficiency into the existing environment, and to introduce more optimization into the storage infrastructure, it is critical to take advantage of recent advances in technical innovation.

The ODS model enhances existing models by combining the best of traditional ITIL-based standards with innovation to achieve a utility approach: "Optimized Data Services." An Optimized Data Services (ODS) utility is achieved through virtualization and physical abstraction, in order to optimize data movement between compute and storage elements. Once virtualized, the ODS platform enables the creation of policies that enforce specific service levels for explicit or pooled datasets. The grouping of data elements for consistency or recovery purposes is no longer hampered by physical constraints (e.g., LUNs in the same array, or SAN-attached versus non-SAN-attached devices or hosts).

The overarching goal of the ODS model is to make the ability to provision, protect, migrate, deduplicate, encrypt, replicate, recover, and archive any data source to any application, in real time and via policy, more cost effective and operationally efficient (i.e., simple). The questions IT managers need to ask themselves to ensure that the solutions being chosen and implemented conform to the ODS model are:
- Does the solution simplify operations?
- Can we use the same solution across all platforms and applications?
- Does the solution leverage existing assets?
- Can we leverage current policies and procedures?
- Can we implement it based on the savings it provides, rather than relying on new budget?
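To make the policy-driven idea above concrete, here is a minimal sketch in Python. The names and fields (`DataServicePolicy`, `rpo_seconds`, `replicate_to`, and so on) are hypothetical illustrations, not part of any actual ODS product: the point is only that a service level can be expressed once, as data, and applied to a dataset regardless of which array or SAN it physically lives on.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DataServicePolicy:
    """Hypothetical service-level policy attached to a pooled dataset."""
    dataset: str
    rpo_seconds: int            # recovery point objective
    rto_seconds: int            # recovery time objective
    replicate_to: Optional[str] # DR site target, or None for no replication
    encrypt: bool
    dedupe: bool

# The same policy object applies whether the dataset sits on a SAN
# array, NAS, or direct-attached disk -- physical location is abstracted.
gold = DataServicePolicy(
    dataset="oracle-erp",
    rpo_seconds=0,        # last known good write
    rto_seconds=900,      # 15-minute recovery target
    replicate_to="dr-site-1",
    encrypt=True,
    dedupe=True,
)
print(gold)
```

A policy engine would then enforce these targets for every dataset tagged with the policy, rather than an administrator configuring each array or host individually.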
An amazing opportunity currently exists for IT to reduce costs by adding innovative new technologies to existing storage services. The inclusion of continuous data protection and replication (CDP/CDR), data deduplication, and virtualization will enable datacenter administrators and CIOs to offer an alternative data-services menu alongside the existing solutions. The goal is to provide a method of protection and recovery that offers enhanced service levels and leverages existing hardware to improve the overall return on existing assets. Existing operations can continue as normal while the additional technologies are implemented. In other words, the ODS model should be implemented in phases as an augmentation rather than a replacement of existing practices, and used as an alternative offering on the menu of services provided to IT consumers.

So what are the advantages of optimizing data services? One example is how the ODS model changes the physics of backup and recovery:

- Because CDP continually captures and stores all updates to an "off-frame" copy, and then snapshots the data in coordination with the application, the entire backup process, and all of its associated costs, is eliminated. Backup is no longer a process of moving data from point A to point B, so the physics of the traditional backup process disappear, and the RPO (recovery point objective) becomes the last known good write.
- Because recovery is now a simple matter of mounting a snapshot, all data movement for recovery goes away. This in turn improves the RTO (recovery time objective) to 15 minutes or less for an application of any size.
- The off-frame copy is used to continually replicate data for disaster recovery (DR) over IP, which:
  1. Eliminates the need for array-based replication licenses
  2. Enables the use of any storage at the DR site
  3. Eliminates the expense of SAN-extension gear
  4. Provides an immediately mountable, consistent copy for DR

Data deduplication is a great way to reduce the WAN bandwidth required for DR replication, but because reconstituting the datasets on the other side takes time and can hurt recovery time, primary applications are usually not deduplicated. The ODS model changes the physics of this issue by implementing sub-block-level delta versioning to eliminate white space and duplicated updates. The process, called microscanning, has been shown to reduce the amount of data that must be transferred by up to 70:1. The result is the bandwidth benefit of data deduplication for primary applications without rehydrating the data on the other side: all of the benefits, none of the pain. This alone can enable businesses to leverage their existing bandwidth to provide DR recovery for many more applications than previously possible.

These are just a few of the ways the ODS model can change the physics of data protection and recovery, while at the same time enabling greater use of existing assets to reduce current costs.
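The backup-elimination argument can be illustrated with a toy sketch. This is not any vendor's CDP implementation, just the shape of the idea: every write is captured to an off-frame journal as it happens, and "recovery" is materializing a point-in-time view of that journal rather than moving data back from a backup copy.

```python
import time

class CDPJournal:
    """Toy continuous-data-protection journal: every write to the
    primary volume is also appended, timestamped, to an off-frame log."""
    def __init__(self):
        self.entries = []  # (timestamp, block_no, data)

    def capture(self, block_no, data):
        self.entries.append((time.time(), block_no, data))

    def snapshot(self, as_of=None):
        """'Mounting' a recovery image is just replaying the journal up
        to a point in time -- no bulk data movement from A to B."""
        as_of = time.time() if as_of is None else as_of
        image = {}
        for ts, block_no, data in self.entries:
            if ts <= as_of:
                image[block_no] = data
        return image

journal = CDPJournal()
journal.capture(0, b"hello")
journal.capture(1, b"world")
journal.capture(0, b"HELLO")   # a later overwrite of block 0
view = journal.snapshot()      # RPO = the last known good write
print(view[0])
```

The point of the sketch is that the recovery point is simply the most recent captured write, and presenting a snapshot is a replay or metadata operation rather than a bulk restore, which is what collapses the RTO.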
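The microscanning claim can likewise be sketched. The code below is a toy illustration of sub-block delta versioning, not the actual microscanning algorithm; the 64-byte sub-block size and the name `changed_subblocks` are assumptions for illustration. It shows why a small edit inside a large dirty block need not send the whole block across the WAN.

```python
def changed_subblocks(old: bytes, new: bytes, size: int = 64):
    """Return only the sub-blocks of `new` that differ from `old`,
    keyed by byte offset -- a toy stand-in for sub-block delta
    versioning of a dirty storage block."""
    deltas = {}
    for off in range(0, len(new), size):
        if new[off:off + size] != old[off:off + size]:
            deltas[off] = new[off:off + size]
    return deltas

old = bytes(4096)              # a 4 KiB block, initially all zeros
new = bytearray(old)
new[100:104] = b"edit"         # a tiny update somewhere inside it
deltas = changed_subblocks(old, bytes(new))

sent = sum(len(d) for d in deltas.values())
# The whole 4 KiB block is "dirty", but only one 64-byte
# sub-block actually needs to cross the wire.
print(len(new), sent)
```

Here a 4096-byte dirty block reduces to a single 64-byte transfer, a 64:1 reduction, which is the mechanism behind the up-to-70:1 figure: the receiving side applies small deltas in place, so there is no dataset rehydration step on recovery.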