In his web research, Chris found that tape tends to be a better option for smaller IT shops, where performance requirements are lower. For larger organizations with higher performance and capacity requirements, disk-based solutions with deduplication are a better choice. Of course, results can vary depending on vendor pricing.
Chris goes on to highlight both sides of the argument, charting the price and performance of the various infrastructures. He makes the case for both tape and disk-based solutions as it relates to the size of the environment. He also states that tape is not dead, because it can be used for backup in smaller environments or as the archival system for larger environments. To get the full story on the backup debate, check out Chris’ post “Tape versus disk: The backup war exposed.”
Tornado season typically runs from March through May, but tornadoes can occur at any time. According to the National Oceanic and Atmospheric Administration (NOAA), about 1,200 tornadoes strike the United States each year. Below are a few simple tips to keep in mind to prepare your data center this tornado season:
- Identify critical servers and resources that need to be available in the event that a tornado wipes out power to the data center.
- Create a disaster recovery response team and ensure that every member knows his/her responsibility for validating the availability of key data center resources.
- Create a scenario response list that outlines the steps to recovery based on the situation, whether it’s a power outage or a server failure.
- Verify and document configurations, including networking, backup, storage and accounts to access all equipment.
- Alert any third-party vendors that may have copies of company data (e.g. offsite tape storage vendors, cloud providers) because that data may need to be sent to the DR site.
- Have a backup plan that provides some automation. Automation is key because manual steps completed by staff often fall victim to human error.
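A scenario response list like the one described above can itself be kept as code or structured data, so recovery steps are documented once and looked up the same way every time rather than recalled from memory during a crisis. The sketch below is purely illustrative; the scenario names and steps are hypothetical placeholders, not a prescribed plan.

```python
# Illustrative sketch: a scenario response list kept as data.
# Scenarios and steps below are hypothetical examples only.

RESPONSE_PLAN = {
    "power_outage": [
        "Fail over critical servers to the DR site",
        "Notify the disaster recovery response team",
        "Verify network and storage configurations at the DR site",
    ],
    "server_failure": [
        "Restore the affected server from the latest backup",
        "Validate application data integrity after restore",
        "Notify users once service is confirmed available",
    ],
}

def respond(scenario):
    """Return the documented recovery steps for a given scenario."""
    steps = RESPONSE_PLAN.get(scenario)
    if steps is None:
        raise ValueError(f"No documented response for: {scenario}")
    return steps

# Print the numbered runbook for a power outage.
for i, step in enumerate(respond("power_outage"), 1):
    print(f"{i}. {step}")
```

Keeping the plan in a single structure also makes it easy to script the automation the last tip calls for, since each step can later be replaced by a function call instead of a manual action.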
FalconStor offers another look into the mind of its customers with its latest customer success video. In this five-minute interview, Justin Bell, network engineer at Strand Associates, shares how FalconStor CDP technology allowed the large engineering firm to shrink its recovery time objectives (RTOs) and recovery point objectives (RPOs), ensuring data availability and business continuity.
Strand Associates, a multidisciplinary engineering firm, has offices throughout the Midwestern United States. Its IT infrastructure consists of Microsoft Windows servers (2008 R2, 2008, and 2003) spread across nine office locations. Users in each of these offices need to access files on various servers from other offices at any given time.
The traditional approach to this problem is to temporarily disable write access to data during the backup, by quiescing the application or
by having the operating system enforce exclusive read access. This works when regular downtime is acceptable, but 24/7 systems cannot bear service stoppages. To avoid downtime, high-availability systems may instead perform the backup on a snapshot—a read-only copy of the data set frozen at a point in time—and allow applications to continue writing to their data. In some systems, once the initial snapshot of a data set is taken, subsequent snapshots copy only the changed data and use a system of pointers to reference the initial snapshot.
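The pointer-based, copy-on-write idea described above can be sketched in a few lines. This is a minimal illustration of the general technique, not any vendor's implementation: a snapshot holds pointers into the live volume, and only when a block is overwritten is its original contents copied aside for the snapshot.

```python
# Minimal copy-on-write snapshot sketch (illustrative only).
# A snapshot reads unchanged blocks through pointers to the live
# volume; a write first preserves the old block for any snapshot
# that has not yet saved it, so the snapshot stays frozen in time.

class Snapshot:
    def __init__(self, volume):
        self.volume = volume
        self.preserved = {}             # block index -> original data

    def read(self, index):
        # Preserved copy if the block changed, else follow the
        # pointer to the live volume's unchanged block.
        return self.preserved.get(index, self.volume.blocks[index])

class Volume:
    def __init__(self, blocks):
        self.blocks = list(blocks)      # live data blocks
        self.snapshots = []             # active snapshots

    def snapshot(self):
        snap = Snapshot(self)
        self.snapshots.append(snap)
        return snap

    def write(self, index, data):
        # Copy-on-write: preserve the old block before overwriting.
        for snap in self.snapshots:
            if index not in snap.preserved:
                snap.preserved[index] = self.blocks[index]
        self.blocks[index] = data

vol = Volume(["a0", "b0", "c0"])
snap = vol.snapshot()
vol.write(1, "b1")                      # live volume changes...
print(snap.read(1))                     # ...snapshot still sees "b0"
```

Because unchanged blocks are never copied, taking the snapshot is nearly instantaneous, which is why applications can keep writing while a backup reads from the frozen view.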
When it comes to backup and recovery, data center managers are fighting battles on several fronts. First, they face exploding data growth – increased numbers of data volumes, larger data volumes and increased numbers of servers – even as backup windows shrink (assuming there is a backup window anymore). Second, IT has to keep costs down despite expanding protection license fees, increased capacity expenses, inefficient utilization and other factors. This is the classic case of doing more with less. Finally, organizations have a data assurance burden – their teams lack confidence in the integrity of backup data. They have been burned too many times in the past when they needed to access backup data only to find out it was not usable or incomplete.
Poor Juliet Capulet. She asked, “What’s in a name?” And she answered her own question: “that which we call a rose by any other name would smell as sweet.” But she was a foolish teenager, a girl who made lots of wrong assumptions on her way toward fulfilling her destiny as a “star-crossed lover.” Pardon the Shakespeare tangent away from technology; I did take some liberal arts classes in undergrad. (Way back then, Engineering, Math and Physics classes averaged 0.5068 girls per class.)
A name, of course, matters quite a bit. The term we use to describe anything – a person, a phenomenon, a technology – influences our understanding of that item’s purpose and significance. This rings true in the data center, as well, where the tendency to truncate “backup and recovery” to just “backup” can have some negative and self-fulfilling results.