FalconStor Community

FalconStor Blog
In his recent Computerworld blog, Chris Poelker discusses the ongoing backup war between tape and disk. He highlights the discrepancies between how chief financial officers (CFOs) and IT managers understand backup and disaster recovery (DR). CFOs look to the least expensive method, while IT managers push for the strongest solutions that ensure the smooth operation, recovery, and security of the data and applications that keep businesses running. In the end, the CFO tends to win the argument, so it is clear that cost is at the forefront of this decision. Therefore, Chris chooses to frame the disk-versus-tape battle in terms of price and performance rather than technical details.

In his web research, Chris found that tape tends to be a better option for smaller IT shops, where performance requirements are lower. For those larger organizations with higher performance and capacity requirements, disk-based solutions with deduplication are a better choice. Of course, results can differ due to various vendor prices.

Chris goes on to highlight both sides of the argument, charting the price and performance of the various infrastructures. He makes the case for both tape and disk-based solutions as it relates to the size of the environment. He also states that tape is not dead, because it can be used for backup in smaller environments or as the archival tier for larger ones. To get the full story on the backup debate, check out Chris’ post “Tape versus disk: The backup war exposed.”
Published in Data Backup
It is 2013 Business Continuity Awareness Week. From March 18 to 22, the Business Continuity Institute will raise awareness about the importance of business continuity. Disasters can strike at any time, whether they stem from natural disasters, human errors, or malicious attacks. And with tornado season descending upon us, we want to offer some quick tips to prepare your data center.

Tornado season typically runs from March through May, but tornadoes can occur at any time. According to the National Oceanic and Atmospheric Administration (NOAA), about 1,200 tornadoes strike the United States each year. Below are a few simple tips to keep in mind to prepare your data center this tornado season:
  1. Identify critical servers and resources that need to be available in the event that a tornado wipes out power to the data center.
  2. Create a disaster recovery response team and ensure that every member knows his/her responsibility for validating the availability of key data center resources.
  3. Create a scenario response list that outlines the steps to recovery based on the situation, whether it’s a power outage or a server failure.
  4. Verify and document configurations, including networking, backup, storage and accounts to access all equipment.
  5. Alert any third-party vendors that may have copies of company data (e.g. offsite tape storage vendors, cloud providers) because that data may need to be sent to the DR site.
  6. Have a backup plan that provides some automation. Automation is key because manual steps completed by staff often fall victim to human error.
These steps will help you prepare your data center for the upcoming tornado season or any other storm that may threaten your location. The most important point: don’t wait until a storm is already bearing down on your company. Be proactive, and have a tested plan in place to handle any issue, whether natural or caused by simple human error.
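The automation called for in step 6 can be as simple as a script that verifies the availability of the critical servers identified in step 1 and produces a report the DR response team can act on. The sketch below is purely illustrative; the hostnames and ports are hypothetical placeholders, not part of any real environment.

```python
# Hypothetical availability check for a DR scenario-response list.
# Hostnames/ports below are placeholders; substitute your own inventory.
import socket

CRITICAL_SERVERS = [
    ("backup-server.example.com", 22),    # placeholder backup host
    ("db-server.example.com", 5432),      # placeholder database host
]

def check_availability(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def run_checks(servers):
    # Build a simple status report: "host:port" -> reachable?
    return {f"{host}:{port}": check_availability(host, port)
            for host, port in servers}
```

Running such a check on a schedule, rather than relying on staff to verify each resource by hand, removes the manual steps where human error tends to creep in.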
Published in Disaster Recovery


FalconStor offers another look into the mind of its customers with its latest customer success video. In this five-minute interview, Strand Associates’ Justin Bell, network engineer, shares how FalconStor CDP technology allowed this large engineering firm to shrink its recovery time objectives (RTOs) and recovery point objectives (RPOs), ensuring data availability and business continuity.

Strand Associates, a multidisciplinary engineering firm, has offices throughout the Midwest United States. Its IT infrastructure consists of Microsoft Windows servers (2008 R2, 2008, and 2003) spread across nine office locations. Users in each of these offices need to access files on various servers from other offices at any given time.
Published in Data Backup
Wednesday, 28 March 2012 20:14

Backup is Old School

As Wikipedia notes, full backups have been the traditional approach to protecting large data sets. The problem is that in today’s environment of high data growth and demanding 24x7 operation, full or even incremental backups take time that simply is not available. Multi-tasking or multi-user systems will constantly be trying to send writes to data that is being backed up.

The traditional approach to this problem is to temporarily disable write access to data during the backup, by quiescing the application or by having the operating system enforce exclusive read access. This works when regular downtime is acceptable, but 24x7 systems cannot bear service stoppages. To avoid downtime, high-availability systems may instead perform the backup on a snapshot—a read-only copy of the data set frozen at a point in time—and allow applications to continue writing to their data. In some systems, once the initial snapshot is taken of a data set, subsequent snapshots copy the changed data only and use a system of pointers to reference the initial snapshot.
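The pointer-based snapshot scheme described above can be sketched with a toy copy-on-write block store. This is a minimal illustration of the general technique, not any vendor’s implementation; all names are invented for the example.

```python
# Toy copy-on-write snapshot store: snapshots share unchanged blocks with
# the live data set via lookup, and only capture a block when it is about
# to be overwritten.

class SnapshotStore:
    def __init__(self):
        self.live = {}        # block number -> current data
        self.snapshots = []   # each snapshot: block number -> frozen data

    def take_snapshot(self):
        # A new snapshot starts empty; it fills in only blocks that change
        # later, so taking it is nearly instantaneous.
        self.snapshots.append({})

    def write(self, block, data):
        # Copy-on-write: before overwriting, preserve the old block in the
        # most recent snapshot if it has not been captured there yet.
        if self.snapshots and block not in self.snapshots[-1] and block in self.live:
            self.snapshots[-1][block] = self.live[block]
        self.live[block] = data

    def read_snapshot(self, index, block):
        # Walk forward from the requested snapshot: the first snapshot that
        # captured the block holds its frozen value; if none did, the block
        # never changed, so the live copy is still valid.
        for snap in self.snapshots[index:]:
            if block in snap:
                return snap[block]
        return self.live.get(block)
```

A backup process can then read from `read_snapshot` while applications keep writing through `write`, which is exactly why high-availability systems favor this approach over freezing the application.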

Published in Data Backup

When it comes to backup and recovery, data center managers are fighting battles on several fronts. First, they face exploding data growth – increased numbers of data volumes, larger data volumes and increased numbers of servers – even as backup windows shrink (assuming there is a backup window anymore). Second, IT has to keep costs down despite expanding protection license fees, increased capacity expenses, inefficient utilization and other factors. This is the classic case of doing more with less. Finally, organizations have a data assurance burden – their teams lack confidence in the integrity of backup data. They have been burned too many times in the past when they needed to access backup data only to find out it was not usable or incomplete.

Poor Juliet Capulet.  She asked, “What’s in a name?” And she answered her own question: “that which we call a rose by any other name would smell as sweet.”  But she was a foolish teenager, a girl who made lots of wrong assumptions on her way toward fulfilling her destiny as a “star-crossed lover.”  Pardon the Shakespeare tangent away from technology; I did take some liberal arts classes in undergrad. (Way back then, Engineering, Math, and Physics classes averaged 0.5068 girls per class.)
A name, of course, matters quite a bit.  The term we use to describe anything – a person, a phenomenon, a technology – influences our understanding of that item’s purpose and significance.  This rings true in the data center, as well, where the tendency to truncate “backup and recovery” to just “backup” can have some negative and self-fulfilling results.

Published in Disaster Recovery