Competitive claim #3:
FalconStor CDP creates complex disk management issues. For every server protected, an individual journal and snapshot space must be created and sized appropriately. Also, as FalconStor CDP captures every data change, the journal space is particularly subject to growth issues from sudden spikes in application utilization.
Once installed, FalconStor CDP automates everything according to the policies you set to meet your business requirements. Journal and snapshot space sizing and expansion are handled automatically by that original policy. FalconStor CDP also virtualizes disk from any vendor into a shared pool that expands automatically. EMC RecoverPoint cannot do this, and NetApp does not offer a true CDP* solution.
Competitive claim #4:
FalconStor CDP creates disk I/O issues as the journal area must be periodically “flushed,” where data is written from the defined journal space down to the snapshot image, impacting system performance. This problem can be negated with NetApp’s Data ONTAP file system as it never moves data blocks.
FalconStor CDP version 7 includes SafeCache and HotZone technologies to deliver extreme performance even while journaling every I/O operation. The FalconStor CDP journal functions as a FIFO (first in, first out) buffer, and policy-based administration automates the flushing process so that it occurs only during periods of low I/O activity, alleviating any performance impact. The journal will also be auto-expandable in the next version.

We believe the ability to provide zero data loss through continuous protection of every write, even when an application is issuing thousands of write operations per second (IOPS), is more important than any perceived journal management issues. Consider a banking or stock trading application in which billions of dollars can move within a single transaction. Snapshots don't help when the transaction you need to restore occurred 45 milliseconds ago; what you need is a solution that can recover at the sub-second level without losing any data. FalconStor's solution can.

Stay tuned for more blogs in this series, as we continue our efforts to clear up misrepresentations about our suite of data protection solutions.
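To make the flushing behavior concrete, here is a minimal Python sketch of a FIFO write journal that drains to a snapshot image only when I/O activity is below a policy threshold. The class and parameter names are illustrative assumptions for this example, not FalconStor's actual implementation.

```python
from collections import deque


class CdpJournal:
    """Toy sketch of a CDP-style FIFO write journal with policy-based flushing."""

    def __init__(self, flush_iops_threshold=100):
        self.entries = deque()                        # (block_addr, data), oldest first
        self.flush_iops_threshold = flush_iops_threshold
        self.recent_iops = 0                          # writes seen in the current window

    def record_write(self, block_addr, data):
        # Every write is journaled before it is acknowledged -> zero data loss.
        self.entries.append((block_addr, data))
        self.recent_iops += 1

    def maybe_flush(self, snapshot):
        # Policy: drain the journal to the snapshot image only while application
        # I/O is quiet, so flushing never competes with production load.
        if self.recent_iops >= self.flush_iops_threshold:
            return 0
        flushed = 0
        while self.entries:
            addr, data = self.entries.popleft()       # FIFO: oldest write first
            snapshot[addr] = data
            flushed += 1
        return flushed


snap = {}
journal = CdpJournal(flush_iops_threshold=3)
journal.record_write(0, b"A")
journal.record_write(1, b"B")
journal.recent_iops = 0                               # simulate a quiet period
print(journal.maybe_flush(snap))                      # 2 entries drained to the snapshot
```

The point of the policy check is that journaling itself stays in the fast path, while the heavier flush work is deferred to idle windows.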
*True CDP is achieved when the solution captures all data as it is written to provide recovery to any point in time.
Competitive claim #1:
FalconStor CDP provides inconsistent data, which can be a problem when recovering a CDP-based database image, as there is no ability to freeze data and capture a consistent image of an application. The benefit of continuous data protection (CDP) to capture every write is not needed unless the user is a financial institution. Realistically, most users use “near CDP” mode to capture data at regular intervals.
FalconStor CDP moves backup from the traditional bulk data movement to a service-oriented data protection model with recovery point objectives (RPO) of zero data loss. This is not true of all so-called CDP products, which fall into two camps:
- Near CDP (EMC RecoverPoint, etc.): The solution captures data in near real-time and provides multiple recovery points in time.
- True CDP (FalconStor CDP): The solution captures ALL data as it is written to provide recovery to ANY point in time.
This level of recovery granularity at the disk level typically negates the need for database agents on the server. The great part is that FalconStor CDP also provides thousands of snapshots in conjunction with true continuous journaling for consistent local and disaster recovery. We also provide intelligent agents when required by the database vendor to create true application-level integrated recovery points, including check-pointing, hot backup mode integration with Oracle, SQL integration, Oracle RMAN integration, and more.
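The recovery-granularity difference between the two camps can be illustrated with a short Python sketch: with a continuous write journal, recovering to any instant is just a matter of replaying writes up to the requested timestamp. This is a hypothetical illustration of the concept, not any vendor's actual code.

```python
def recover_to(journal, base_image, t):
    """Recover a volume image as it existed at time t.

    journal: list of (timestamp, block_addr, data), ordered by time.
    base_image: dict of block_addr -> data at the journal's start.
    """
    image = dict(base_image)
    for ts, addr, data in journal:
        if ts > t:                 # stop at the requested instant
            break
        image[addr] = data         # apply each captured write in order
    return image


# Three writes captured continuously; recover to any point between them.
journal = [(1.0, 0, b"v1"), (2.0, 0, b"v2"), (3.0, 1, b"x")]
print(recover_to(journal, {}, 2.5))   # {0: b'v2'} -- the state just after t=2.0
```

A near-CDP product, by contrast, can only return the states that happen to coincide with its periodic capture points.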
Competitive claim #2:
FalconStor CDP consumes too many storage resources, as data quickly accumulates and each snapshot has to be stored prior to the deduplication process.
FalconStor CDP recovery is a natural, more efficient process for database administrators, and it allows transaction logs to be expanded to any size. FalconStor CDP captures all the data needed to recover a database consistently from any point in time with zero data loss, and you can tune it based on your business needs. FalconStor's MicroScan technology also enables consistent recovery with write-order fidelity for replicated data across locations, while reducing WAN bandwidth requirements by as much as 90 percent or more, without the need to dedupe the data. All replicated data stays in its natural state for instant recoverability. MicroScan technology also provides intelligent recovery across slow links: very large databases can be recovered in seconds by shipping only the disk sectors required to make the database whole again. MicroScan technology is patented by FalconStor, so it is not available from any other vendor.
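The idea of shipping only the required disk sectors can be sketched in a few lines of Python. This is an illustrative toy in the spirit of sector-level delta replication; the sector size and function names are assumptions for the example, not MicroScan's actual implementation.

```python
SECTOR = 512  # bytes per disk sector (assumed for this example)


def changed_sectors(old, new):
    """Yield (sector_index, payload) for every sector that differs."""
    assert len(old) == len(new)
    for i in range(0, len(new), SECTOR):
        if new[i:i + SECTOR] != old[i:i + SECTOR]:
            yield i // SECTOR, new[i:i + SECTOR]


def apply_sectors(old, deltas):
    """Rebuild the new image from the old one plus the shipped sectors."""
    buf = bytearray(old)
    for idx, payload in deltas:
        buf[idx * SECTOR:(idx + 1) * SECTOR] = payload
    return bytes(buf)


old = bytes(4 * SECTOR)                       # a 4-sector volume, initially empty
new = bytearray(old)
new[SECTOR:SECTOR + 4] = b"data"              # dirty exactly one sector
deltas = list(changed_sectors(old, bytes(new)))
print(len(deltas))                            # 1 -- only one sector crosses the WAN
assert apply_sectors(old, deltas) == bytes(new)
```

Because only the changed sectors cross the link, the bandwidth cost scales with the change rate rather than with the total database size.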
Here at FalconStor, we are tirelessly working on a new generation of data protection solutions. In the meantime, we’ll be clearing up the FUD around our suite of data protection solutions in a series of blogs. In part two of this blog, I will address two more competitive claims about FalconStor CDP that are dead wrong. Stay tuned.
In his review of CDP then and now in Storage Magazine this month, W. Curtis Preston writes that CDP then was kind of like Star Trek: a great idea too far ahead of its time. The article, “Continuous data protection; it's back!”, details the industry’s first pass at CDP. Dominated by start-ups and unproven technologies, the CDP field of old left enterprises unimpressed.
Oh, how times do change.
Recently, I joined Craig Peterson on his weekly podcast, Tech Talk with Craig Peterson, to discuss the ever-changing world of disaster recovery, disk storage and data protection.
My time on the show allowed me to reflect on the dramatic changes we've seen in data protection and data recovery since they became hot-button issues thirty years ago. Data protection used to mean backing up data to tape drives, with the tapes shipped off-site for storage each night. If something happened on-site and data needed to be recovered, users were limited to a snapshot of the previous day, losing any data created since the last nightly backup.
In today’s fast-paced world, this method of data protection and recovery would not suffice as organizations, especially small-to-medium businesses, cannot afford to lose any data. Luckily, advancements in technology have provided us with more effective and efficient means of storing and recovering data.
As our FalconStor blog surfers know, one of the topics we frequently discuss is continuous data protection (CDP), a process that allows users to restore data to any point in time: every data change is continuously trickled to an off-site, virtual location as it is made. That data is fully protected in the event of an on-site IT issue and is immediately available for recall.
Even with its steep cost, CDP adds a great deal of value for users and organizations. For businesses, it gives management peace of mind that the data behind their transactions will always be there. As I noted in the podcast, this is especially important given that recent studies show nine out of 10 businesses that cannot recover their data within 10 days go out of business. For hospitals and the healthcare industry, having up-to-date data available 24/7 can literally mean the difference between life and death. For college students, CDP provides a backup copy of a final project if a laptop hard drive is erased the morning it is due.
So what's the catch? Data protection has come a long way, and we at FalconStor are proud to offer our resellers and customers a solution that helps them through the new age of data protection. Innovative data protection and recovery processes have given users the assurance they need regarding the safety and recoverability of their data, resulting in a paradigm shift from a data backup mindset to one of data recovery.
Do you have recent stories of migrating from tape to CDP? Tell us about them.
As IT and business managers are well aware, data replication can quickly increase bandwidth and storage costs if you aren’t careful. Wide area network (WAN) optimization can help you reduce the amount of bandwidth and storage capacity used by data replication, enabling cost savings while maintaining efficient application performance.
Remote replication can also improve the backup process and shrink the backup window to virtually nothing. We see many organizations taking snapshots of their data, replicating them to a remote location, and completing the backup-to-tape process from the remote site. The primary site and its associated business applications are effectively insulated from any backup process disruption, eliminating both downtime and the backup window.
If your organization is still using tape backup, you have a relatively reliable and affordable data protection method – especially for long-term protection. However, as your data volumes grow and your dependence on data availability increases, the limitations of tape media will become severe.
Perhaps you are already struggling with this and considering the pros and cons of moving away from tape-based backup.