FalconStor Blog
It’s true… smaller amounts of data are simply more manageable and easier to store. Sometimes it’s not until things hit critical mass that we see the light. To help guide you to the light, we recently teamed up with International Data Group (IDG) Research to conduct a survey about backup and optimizing business data. We polled information technology (IT) managers at companies of various sizes and from different industries. What we found was not surprising. The majority of these businesses had already deployed solutions to deal with the explosive rise of digital information. We also saw that companies with the biggest data growth and relatively deeper pockets had adopted the technology and paved the way for the rest of the market, which is growing rapidly. Also interesting: the majority of respondents had already achieved faster backups, quicker recovery, and improved efficiency all around. That makes sense… but why haven’t more people adopted? This is the question that keeps me up at night. See the breakdown in the infographic, or, if you have more time, get the full report.

 

 

Published in Data Deduplication

FalconStor® Continuous Data Protector (CDP) with RecoverTrac™ technology has been chosen as a finalist in the backup and disaster recovery software and services category in Storage magazine’s and SearchStorage.com’s 2011 Products of the Year competition. This category covers backup and recovery software, cloud backup and recovery services, disaster recovery (DR), snapshot and replication, electronic vaulting, and archivers. FalconStor CDP was chosen for its unified backup and DR, local and remote data recovery, and its ability to provide automatic service-oriented recovery with RecoverTrac. The RecoverTrac tool simplifies and automates complex, time-consuming, and error-prone failover and failback operations of systems, applications, services, and entire datacenters, making FalconStor CDP the most comprehensive disk-based data protection system for backup and DR available.

Published in FalconStor

If you picked up this month’s issue of Storage magazine, you likely noticed the article by Jacob Gsoedl titled “Blueprint for cloud-based disaster recovery” (page 21). You also might have noticed my quote in the piece, which detailed different options for disaster recovery (DR) in the cloud (page 27).

Gsoedl notes that there are many different ways to do DR in the cloud. He ably discusses the pros and cons of managed applications and managed DR, backup to and restore from the cloud, replication to virtual machines in the cloud, and backup to and restore to the cloud.

For that final option, I told Gsoedl that, “several cloud service providers use our products for secure deduped replication and to bring servers up virtually in the cloud.” I’d love to expand on that statement, if I may.

As the article explains, these recommendations all offer attractive elements for companies, depending on their needs, resources, and recovery time objective (RTO) and recovery point objective (RPO) requirements. What it didn’t get to, however, is that the huge opportunity in cloud-based DR, regardless of the specific method employed, is to change the backup paradigm itself.

The cloud expands the opportunity to stop talking about just protecting specific files or data blocks and start talking about service-oriented data protection (SODP). This is what matters to enterprises, of course. Beyond protecting bits and bytes, the cloud needs to help organizations deliver better service to users.

That’s what FalconStor data protection is about. Our tools deliver cloud-based backup and DR designed with SODP in mind, and any blueprint for cloud-based disaster recovery must have service embedded in its foundation.

Thursday, 13 January 2011 14:07

Replication is NOT Disaster Recovery

Most storage vendors offer some type of volume copy functionality, either in-system or remote replication. These copy functions are commonly promoted as business continuity (in-system copy) or disaster recovery (remote replication). Replication, or transporting data from one location to another, is analogous to a household moving company: it transports your entire household, in many boxes, to your new house. However, once all your belongings are safely at your new home, you still have the complex and laborious task of unpacking and arranging things to make the house functional. Isn’t this effectively what remote data replication does? Sure, all your data is safely at your remote data center, but now what? Where's the 'Recovery' aspect of this paradigm?
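To make the “unpacking” half concrete, here is a minimal Python sketch of a recovery runbook layered on top of replicated data. Everything in it (the class names, the step list) is invented for illustration; it is not RecoverTrac or any vendor API, just the shape of the orchestration that raw replication leaves undone.

```python
# Hypothetical sketch of failover orchestration; all names invented.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class RecoveryStep:
    name: str
    action: Callable[[], None]    # brings one resource up at the DR site
    rollback: Callable[[], None]  # undoes the step if a later one fails

@dataclass
class FailoverRunbook:
    steps: List[RecoveryStep] = field(default_factory=list)

    def execute(self) -> None:
        done: List[RecoveryStep] = []
        for step in self.steps:  # ordered: storage, then VMs, then services
            try:
                print(f"running: {step.name}")
                step.action()
                done.append(step)
            except Exception as exc:
                print(f"step {step.name!r} failed ({exc}); rolling back")
                for finished in reversed(done):
                    finished.rollback()
                raise

# Usage: order matters, because services depend on VMs and VMs on storage.
runbook = FailoverRunbook(steps=[
    RecoveryStep("promote replicated LUNs", lambda: None, lambda: None),
    RecoveryStep("boot standby VMs", lambda: None, lambda: None),
    RecoveryStep("redirect client DNS", lambda: None, lambda: None),
])
runbook.execute()
```

The point of the sketch is the ordering and the rollback path: recovery is a dependency-ordered process, not a byte transfer. That process is the part the moving-company analogy says replication alone never delivers.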

Published in Disaster Recovery
Thursday, 30 December 2010 17:40

Backup is Broken - a cost perspective

I’m often amazed by the things I learn talking to customers. This technology space eventually comes down to money: making it, saving it, or wasting it. One example of that last aspect, wasting it, is something I learned from talking to a customer recently.

Published in Disaster Recovery
Tuesday, 02 November 2010 17:08

Breaking the Barriers to Virtualization

 

Now that server virtualization technologies have been proven in many environments, more people are looking at virtualization to improve the efficiency of their primary workloads in the data center. Despite the realized benefits of virtualizing non-mission-critical applications, two questions remain on the minds of IT professionals. One: since traditional backup doesn’t work in virtual environments, how can I effectively protect virtualized workloads? We are talking about mission-critical applications here! Two: I know how much I reduced my server infrastructure with virtualization, but I also know my storage costs went way up as a result. So how can I reduce my storage costs while implementing server virtualization?

In a recent report from ESG on the “Impact of Server Virtualization on Data Protection,” when asked about top server virtualization initiatives for 2010, most respondents placed backup, recovery, and replication right after virtualizing more workloads. It is well understood that server virtualization breaks traditional backup processes. The consolidation of servers and workloads leaves very few resources for backup applications to perform data copies. In virtual server environments, CPU utilization climbs to 60 or 70 percent or more, up from an average of 20 percent in physical environments, leaving very little headroom for the most demanding job of them all: backup. In addition, network utilization rises to the point that very little bandwidth remains for the massive data transfers that backup operations require.
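Some back-of-envelope arithmetic shows why. The numbers below are hypothetical (20 VMs of 500 GB each on one host, a 10 GbE uplink with only 20 percent of it left for backup traffic), but the shape of the result is the point:

```python
# Back-of-envelope backup-window math for a consolidated host.
# All inputs are assumed figures for illustration.

TOTAL_DATA_GB = 20 * 500   # 20 VMs x 500 GB = 10,000 GB on one host
LINK_GBPS = 10             # 10 GbE uplink
BACKUP_SHARE = 0.20        # fraction of the link left for backup traffic

effective_mb_per_s = LINK_GBPS * BACKUP_SHARE * 1000 / 8  # Gb/s -> MB/s
window_hours = TOTAL_DATA_GB * 1000 / effective_mb_per_s / 3600

print(f"effective throughput: {effective_mb_per_s:.0f} MB/s")  # 250 MB/s
print(f"full backup window:   {window_hours:.1f} hours")       # ~11.1 hours
```

At roughly eleven hours for a single full backup, the traffic no longer fits any reasonable nightly window, which is exactly the contention problem described above.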

Published in Data Backup

 

Tape! Well, I talked about broken backup in my last post, but I didn’t mention tape as one of the reasons that backup is broken. Tape remains a form of media that has its applications. There is no way you will hear me saying that “tape sucks” – I’d even say that anyone claiming that has no idea where tape fits into the enterprise, is clueless, or at best “sucks at tape” and doesn’t know how to use it.

Tape has served us beyond its primary mission in the traditional sense of backup. At the risk of stating the obvious, tape was the target and the source of recovery; but it also became the means for data mobility and, in many cases, the transport layer for data migration. Tape offered an encapsulation for data, systems, and entire workloads that was not possible otherwise. And, beyond that, it was the destination of archived data.

Published in Data Backup
Thursday, 14 October 2010 17:59

A New Atomic Unit for Backup

During my conversations with people far smarter than me at SNW Fall in Dallas this week, the subject of the future of backup arose. The questions were: what exactly do you mean by the next generation of data protection and what do you mean when you say backup is broken?

Let me address the backup question first. If truth be told, we are performing this mission-critical task in much the same way we did when Cheyenne invented client/server backup. Our compute world has changed, but our approach to data protection has not kept pace. What I mean is that backup is broken due to three significant industry drivers:

1. Explosive data storage growth. The backup window is an endangered species, if not already extinct. Operations demand 24/7 availability and cannot tolerate outages due to backup traffic. This demand for always-available services, coupled with unprecedented data growth, has created a major challenge for all data protection professionals (a short compounding sketch after this list shows how quickly the window disappears).

2. Virtualization. Virtualization breaks backup for two main reasons. First, we are consolidating a lot of compute and storage power into a smaller number of servers, which stresses the available I/O. Second, virtualization introduces a new data format that challenges legacy backup solutions and at the same time presents a whole new world of data recovery opportunities.

3. Compressed operating budgets. It has been said that for every $1 spent on technology acquisition, $8 are spent to manage it. There is constant pressure on IT to reduce this ratio.
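On the first driver, a tiny compounding calculation makes the “endangered species” claim concrete. The starting window and the 40 percent annual growth rate below are assumptions chosen for illustration, not figures from any survey:

```python
# Assumed: backup throughput stays flat while data grows 40% per year.
window_hours = 4.0   # assumed time a full backup takes today
budget_hours = 8.0   # assumed nightly window available
growth = 1.40        # assumed 40% annual data growth

year = 0
while window_hours <= budget_hours:
    year += 1
    window_hours *= growth

print(f"window blown in year {year}: "
      f"{window_hours:.1f} h needed vs {budget_hours} h available")
# -> window blown in year 3: 11.0 h needed vs 8.0 h available
```

Even a backup that comfortably fits tonight outgrows its window in a few years if throughput does not scale with the data.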

Published in Storage Virtualization

 

Data protection can be interpreted in so many ways today; there is a lack of consensus on where the technology should go and how it should evolve. Where should it apply? And how can we adapt to new technologies such as virtualization or adopt others such as cloud services?

To their credit, the guys at Wikibon started a community project around this same subject, “The Future of Data Protection - How Cloud Computing and Virtualization Change Everything,” and I encourage the participation of all platform, storage, and data protection vendors, as well as the end users who consume the technology. This is my first blog on the topic and an attempt to contribute to the discussion.

I don’t think I need to repeat the fact that traditional backup is broken, but I just did! Three main reasons lead to this statement: exponential data growth, server virtualization, and unacceptable recovery times. In this post I’ll expand only on these three reasons; I will address available technologies and the future of data protection in future blogs.

Published in Data Backup
Wednesday, 08 September 2010 20:31

Thank you, Curtis – the king is indeed naked!

 

Yesterday SearchStorage.com published a Q&A with Curtis Preston, widely known as Mr. Backup. The topic was the use of snapshot technology for data backups – basically, a look at the increased use of SAN-based snapshots as an alternative or complement to traditional backup solutions.

Curtis brings some great points to the discussion and argues that, if you want better backup, snapshots are the way to go. I tend to agree with him. I’ll try to answer some of the questions he leaves open, since the question of backup transformation is probably broader than I can cover in one or even a few blogs. The salient fact that Curtis points out at the end of the Q&A is this: “…the backup and recovery space moves at a glacial pace. Backup people, by nature, are paranoid.” Therefore, I think it’s up to us as a collective – vendors, subject matter experts, and industry analysts, in conjunction with end users – to redefine that space.
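For readers who haven’t worked with array snapshots, here is a minimal sketch of the snapshot-then-replicate pattern the Q&A describes. The ArrayClient class and its methods are hypothetical stand-ins; every real SAN exposes vendor-specific equivalents of the same operations:

```python
# Minimal sketch of snapshot-based protection. The ArrayClient API below
# is hypothetical, standing in for a vendor-specific SAN management API.
import time
from contextlib import contextmanager

class ArrayClient:
    """Stand-in for a SAN array's management interface (hypothetical)."""
    def create_snapshot(self, volume: str, name: str) -> str:
        print(f"snapshot {name} of {volume}")
        return name

    def replicate(self, snapshot: str, target_site: str) -> None:
        print(f"replicating {snapshot} to {target_site}")

@contextmanager
def quiesced(app: str):
    # Pause writes so the snapshot is application-consistent,
    # not merely crash-consistent.
    print(f"quiescing {app}")
    try:
        yield
    finally:
        print(f"resuming {app}")

def protect(array: ArrayClient, app: str, volume: str, dr_site: str) -> None:
    name = f"{app}-{time.strftime('%Y%m%d-%H%M%S')}"
    with quiesced(app):
        snap = array.create_snapshot(volume, name)  # seconds, not hours
    array.replicate(snap, dr_site)  # ship only changed blocks off-site

protect(ArrayClient(), app="crm-db", volume="vol17", dr_site="nj-dr")
```

The design point is the quiesce step: a snapshot taken without pausing and flushing the application is only crash-consistent, which is exactly the kind of detail that separates “we copy data” from “we can recover.”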

Published in Data Backup