
Optimized Data Services Part 3


Finally, imagine if the ODS utility model could be implemented while leveraging the same server and storage infrastructure currently in place! That would make rolling out the solution more cost-effective and easier to deploy than a complete rip-and-replace scenario.

There would be no need to purchase proprietary disks or servers, the accumulated knowledge of the existing environment would not be wasted, and the existing service contracts could stay in place, which could reduce costs further while also lowering the learning curve. To accommodate such a scenario, the ODS solution would have to provide intelligence in an abstraction layer that integrates with and virtualizes the existing storage infrastructure.
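To make that abstraction idea a little more concrete, here is a minimal conceptual sketch in Python of how such a layer might carve a virtual volume out of LUNs that already live on the arrays you own. This is not FalconStor code; the class names and array labels are made up purely for illustration.

    # Conceptual sketch only -- not FalconStor code. It shows an abstraction
    # layer presenting a virtual volume whose extents live on LUNs that
    # already exist on the arrays in place (array names are hypothetical).
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class PhysicalLUN:
        array: str        # whatever is already on the floor
        lun_id: int
        size_gb: int

    @dataclass
    class VirtualVolume:
        name: str
        size_gb: int
        extents: List[PhysicalLUN] = field(default_factory=list)

    class AbstractionLayer:
        """Presents virtual volumes to hosts while the data stays on existing arrays."""
        def __init__(self, pool: List[PhysicalLUN]):
            self.pool = pool

        def provision(self, name: str, size_gb: int) -> VirtualVolume:
            vol, remaining = VirtualVolume(name, size_gb), size_gb
            while remaining > 0:
                if not self.pool:
                    raise RuntimeError("existing arrays are out of capacity")
                lun = self.pool.pop(0)       # take the next free LUN from the pool
                vol.extents.append(lun)
                remaining -= lun.size_gb
            return vol

    # Carve a 500 GB virtual volume out of LUNs on two different existing arrays.
    layer = AbstractionLayer([PhysicalLUN("array-A", 0, 300), PhysicalLUN("array-B", 7, 300)])
    vol = layer.provision("oracle_data", 500)
    print([(lun.array, lun.lun_id) for lun in vol.extents])

The point of the sketch is simply that the intelligence lives above the arrays, so the disks and servers you already own keep doing the work.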


Using an ODS model, recovery from a local failure or a complete disaster should be simple, fast, comprehensive, and cost-effective. The solution should provide automation wherever possible, or at minimum be very simple to use, so that at the time of failure no one has to scramble to figure out how recovery actually works. The ability to test for DR should be an intrinsic part of the design, and it should be easy enough that following a wizard or script is all operations staff need to know. Since many applications also include data feeds from other applications, the ability to provide consistency grouping for recovery across platforms and storage tiers is also a requirement for an ODS model.

All of the capabilities outlined across the three parts of the Optimized Data Services model are a pretty tall order for any vendor to implement, but if achieved, they would make it really easy for IT staff to stay home hanging out with their friends or family while drinking their favorite beverage rather than coming in to work at 3:00 a.m. to recover from someone's stupid mistake. That being said, the ODS utility should be simple and rapid to implement, not something that requires weeks' or months' worth of professional services to install and test.

In fact, wouldn't it be cool if you could dramatically simplify the deployment of the elements that make up the utility by taking an industry-standard server, like a Sun 4600, an IBM X3755, or even an HP DL585, putting in some QLogic HBA cards, inserting a USB stick that included all the software you needed into an open USB port on the server, and rebooting it? That simple process would turn the server into a secure node within the utility that performs a specific function such as virtual provisioning, data protection, data replication, or data deduplication. You could use as many servers as you need for your desired performance, then rack them together to create the Optimized Data Services layer. You have just built your own Data POD, or "Platform for Optimized Data Services," which provides the critical data services and abstraction required for the utility. Simply attach the POD to your existing storage network, zone it in, and you're done.

Attach two PODs together across two sites, turn on the replication function, set your policies, and you have DR. Turn on dedupe, compression, and encryption for replication, and only unique, compressed, and encrypted data will traverse the connection between the PODs at each location (a rough sketch of that data path follows below). Since the PODs can abstract any storage array, you can use less expensive storage at the DR site. Since the PODs can also provide thin provisioning over any protocol, you could simply mount a replicated snapshot for DR recovery (perhaps over iSCSI to eliminate the need for a fabric at the DR site), or just perform a rapid "P to V" (physical-to-virtual) conversion of replicated "C:" drives and bring your application up virtually at the DR site. (SRM integration required here.) You could even create a "mini POD" for your remote locations by using small, low-cost servers with internal storage, or just run the solution within a VMware instance and do it all virtually!

Companies looking to optimize their data services and create a more "services oriented" architecture for their applications and data resources, or who are looking at moving to more of a "cloud computing" model, should take a hard and critical look at the solutions currently available in the market.
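To make the replication piece above a bit more concrete, here is a hypothetical sketch of what the POD-to-POD data path could look like with dedupe, compression, and encryption switched on. It is not a product API, and the XOR step stands in for real encryption (such as AES) purely to keep the example short.

    # Hypothetical sketch of the POD-to-POD data path -- not a product API.
    # Only blocks the DR POD has never seen are shipped, and they are
    # compressed and "encrypted" before they cross the wire.
    import hashlib
    import zlib
    from typing import Optional

    seen_at_dr_site = set()   # fingerprints of blocks the DR POD already holds

    def replicate_block(block: bytes, key: bytes) -> Optional[bytes]:
        """Return the payload to send to the DR POD, or None for a duplicate."""
        fingerprint = hashlib.sha256(block).hexdigest()
        if fingerprint in seen_at_dr_site:        # dedupe: unique data only
            return None
        seen_at_dr_site.add(fingerprint)
        compressed = zlib.compress(block)         # compression
        # Placeholder XOR stands in for real encryption (e.g. AES) for brevity.
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(compressed))

    # Two writes of the same 4 KB block: only the first one crosses the link.
    first = replicate_block(b"A" * 4096, key=b"secret")
    second = replicate_block(b"A" * 4096, key=b"secret")
    print(len(first), second)   # a few dozen bytes, then None

Run at scale, that is the whole idea: only unique, compressed, encrypted data ever traverses the link between the two sites.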
You don't want to have to tie together software from multiple vendors into a Frankenstein-like science project that becomes a support nightmare. Be sure to look for a PLATFORM that provides all the capabilities I have mentioned above, so you can implement simply, quickly, and with peace of mind, knowing that everything is certified, supportable, and can be managed from a single console globally. Everything I have mentioned here is available today, and if you keep reading this blog from time to time, you will find that in the very near future, storage virtualization as you know it will change abruptly through some unique and disruptive technology about to hit the market that will make the optimized data services model ubiquitous.

Chris Poelker


Chris Poelker is FalconStor's Vice President of Enterprise Solutions and author of Storage Area Networks for Dummies.

Website: www.falconstor.com