FalconStor Blog

In my previous blog, Key Criteria for Evaluating Deduplication Solutions, I discussed the most important elements to consider when selecting a deduplication system: flexibility, global deduplication, scalability, performance, high availability, security, and tape environment compatibility.

Now, let’s see how FalconStor deduplication solutions address each of these important decision elements.

Flexibility
FalconStor® Virtual Tape Library (VTL) allows backup administrators to fully align deduplication policies with business goals by letting them choose the method that best meets their specific requirements (a minimal policy sketch follows the list below).
  • Inline deduplication has the primary benefit of minimizing storage requirements.
  • Post-process deduplication is ideal when the key goal is to back up as quickly as possible or to create off-site tape copies when replication is not available.
  • Concurrent deduplication is similar to post-processing but starts as soon as the first set of data has been written and runs concurrently with backup. This is highly suitable for clustered VTL environments. Because replication starts sooner, data can be quickly recovered from a remote site.
  • No deduplication can be used on data that does not deduplicate effectively or on data that is exported to physical tape. Examples include image files and pre-compressed or encrypted data. Selectively “turning off” deduplication saves processing cycles and focuses them on the data that yields the highest ratios and value.
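To make per-job policy selection concrete, here is a minimal sketch in Python. The job names, fields, and default below are hypothetical illustrations, not FalconStor’s actual configuration schema or API.

```python
# Hypothetical per-job deduplication policies. All names and fields are
# illustrative; they do not represent FalconStor's configuration schema.
POLICIES = {
    "exchange-nightly":  {"method": "inline"},        # minimize storage, replicate early
    "filer-weekly-full": {"method": "post-process"},  # fastest possible backup window
    "vtl-cluster-jobs":  {"method": "concurrent"},    # dedup starts as data lands
    "video-archive":     {"method": "none"},          # pre-compressed, poor ratios
}

def dedup_method(job_name: str) -> str:
    """Return the dedup method for a backup job, defaulting to post-process."""
    return POLICIES.get(job_name, {}).get("method", "post-process")

print(dedup_method("video-archive"))  # -> none
```

Choosing “none” for a job like the video archive is exactly the selective “turning off” described above: those cycles are spent instead on data that deduplicates well.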
As you would expect, storage virtualization and the virtualized data center were big topics at VMworld Barcelona last week. When you examine the numerous steps VMware has taken to create an interface between its hypervisor and storage devices through the vStorage APIs for Array Integration (VAAI) and the vStorage APIs for Storage Awareness (VASA), you can only assume there will be further integration down the road.

Not all of the response to VMware’s storage virtualization initiative was positive. Some IT managers and engineers see VMware, in its quest to expand into other markets, as presenting customers with solutions that are not fully baked. You can read more about the response to VMware’s initiative here.

Here at FalconStor, we welcome the move to a more unified virtual experience, as it allows VMware users to take advantage of FalconStor’s simple yet flexible approach to storage virtualization, storage migration, replication, and data protection. This move can also deliver tremendous value to customers as they strive to bring efficiency to the data center. Imagine how easy it will be for customers to introduce VMware into their environment and use existing storage to deploy their virtual infrastructure. As the environment grows, a customer can introduce intelligent storage technology, such as that offered by FalconStor, into the data center. A customer can then not only use heterogeneous storage but also mix and match storage protocols while taking advantage of technologies such as replication, snapshots, data protection, and disaster recovery.

Just for the record, VMware is not the only company spreading the virtual data center message. Archrival Microsoft has stepped up its efforts to show that it, too, provides an alternative (and more cost-effective) offering for those looking to virtualize the data center. With the release of Windows Server 2012 and System Center 2012, Microsoft now offers a comparable virtualization solution with similar storage device integration. Be on the lookout for more from FalconStor on how we provide storage, replication, and business continuity for Hyper-V environments.

Ultimately, this is an important topic to keep track of because it will have a direct effect on the storage and virtualization markets. Customers will have more options for bringing virtualization into their data centers, which makes virtualization more cost-effective. Storage vendors will have a means to integrate their technologies further into the virtual workload, allowing for better management and higher availability of virtual machines. We will continue to follow the virtual data center story, highlight the key topics, and blog about how other vendors are faring in the race for virtualization domination, so stay tuned!
With information growing 50 to 60 percent annually, the cost of managing and protecting data increases continuously, yet IT budgets are limited. Data deduplication is one of the key components of any modern storage strategy, so let’s look at the elements to consider when selecting an optimal system.

Deduplication is important because industry-standard backup practices inherently create large amounts of duplicate data: the backup system copies the same data to secondary storage again and again. By eliminating that duplication, companies can keep more data online longer at significantly lower cost and reduce secondary storage requirements.
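To see why repeated full backups inflate storage, consider a small back-of-the-envelope calculation. The backup size, retention, and change rate below are assumptions chosen for illustration, not benchmarks or guaranteed ratios.

```python
# Illustrative arithmetic only; real-world ratios vary with data type,
# change rate, and retention policy.
full_backup_tb = 10   # assumed size of one full backup
fulls_retained = 12   # assumed retention: twelve weekly fulls
change_rate = 0.02    # assume ~2% of the data changes between fulls

without_dedup = full_backup_tb * fulls_retained
with_dedup = full_backup_tb * (1 + change_rate * (fulls_retained - 1))

print(f"Without dedup: {without_dedup} TB")                    # 120 TB
print(f"With dedup:    {with_dedup:.1f} TB")                   # 12.2 TB
print(f"Reduction:     ~{without_dedup / with_dedup:.0f}:1")   # ~10:1
```

Even under these modest assumptions, the duplicate copies dominate what is stored, and that is the gap deduplication closes.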

Here are elements to consider when looking at a deduplication system:

Flexibility
Some data yields better deduplication results than others, so a deduplication solution should allow backup administrators to align deduplication policies with business goals by letting them choose the method that best meets their specific requirements. These options include the following:
  • Inline deduplication has the primary benefit of minimizing storage requirements (the sketch after this list illustrates the underlying idea). It is ideal for small storage configurations or environments where immediate replication is desired.
  • Post-process deduplication is ideal when the key goal is to back up as quickly as possible. As its name implies, it occurs after the backup process completes, thus it can be scheduled to run at any time.
  • Concurrent deduplication is similar to post-processing, but starts as soon as the first set of records has been written and runs concurrently with backup. Deduplication engines start working immediately, making full use of available CPU resources. This is highly suitable for clustered VTL environments. Replication starts sooner, so data can be quickly recovered from a remote site. One good practice is to use flexible deduplication policies to leverage post-process deduplication for the initial backup (for speed) and then switch to inline for subsequent backups.
  • No deduplication is for data that does not deduplicate effectively or is exported to physical tape. Examples include image files and pre-compressed or encrypted data. Selectively turning off deduplication saves CPU cycles and focuses on deduplicating the data that yields the highest ratios and value.
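For readers who want a feel for the mechanics behind these options, below is a toy sketch of the hash-indexed matching that inline deduplication performs in the data path (referenced in the first bullet above). It assumes fixed-size chunks and an in-memory index for simplicity and does not reflect FalconStor’s implementation.

```python
# Toy sketch of inline, hash-indexed deduplication. Illustrative only;
# this does not represent FalconStor's actual implementation.
import hashlib

CHUNK_SIZE = 64 * 1024  # fixed-size chunking keeps the example simple

class ChunkStore:
    def __init__(self):
        self.chunks = {}  # SHA-256 digest -> chunk bytes (unique data only)

    def write(self, data: bytes) -> list:
        """Ingest a backup stream inline; return its recipe of chunk digests."""
        recipe = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in self.chunks:  # new data: store the chunk once
                self.chunks[digest] = chunk
            recipe.append(digest)          # a duplicate costs only a reference
        return recipe

store = ChunkStore()
store.write(b"A" * 4 * CHUNK_SIZE)  # first full backup
store.write(b"A" * 4 * CHUNK_SIZE)  # identical second backup
print(len(store.chunks))            # 1 unique chunk stored across both backups
```

Post-process and concurrent modes run the same matching after or alongside the backup write rather than in the data path; the trade-off is staging space versus backup-window speed.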
What better way to celebrate the end of summer than with an afternoon outside filled with great food, fun games, and lots of laughs? FalconStor did just that at our end of summer BBQ event where employees of the Melville headquarters were able to kick back and relax, enjoying the nice weather and each other’s company.

I have to hand it to my fellow FalconStor marketers, Rachel & Christina, who planned the entire gig and I’d like to thank them for their hard work in making it such an enjoyable occasion. The food was delicious (can you say Thai ginger boneless wings, chipotle BBQ jumbo wings, and Angus sliders? – yum!) and we had a blast at the game stations. It was great to see everyone having fun while letting their competitive sides come out. Ladder Ball and Bean Bag Toss were a big hit, as was the newly popular Frisbee game, KanJam. One coworker even brought his own football to the party and started a pick-up game in the parking lot!

My favorite part of the festivities happened when a few brave employees decided to take on the international dance craze from the Gangnam Style music video (for those of you who have no idea what I’m talking about, check out this YouTube video to get up to speed). The men of FalconStor showed off their unusual (yet impressive) dance moves to the delight of the crowd.

Last but not least, what would a summer party be without a visit from the ice cream man? Mister Softee wrapped up the afternoon soiree, serving up cups and cones of delicious soft serve ice cream for dessert. It really doesn’t get better than that, folks!

Don’t forget to check out all the fun photos on our End of Summer BBQ Flickr set. The dancing ones are sure to please!
In part one of this blog, I outlined the problems with traditional data backup and disaster recovery – challenges caused by the evolution of the data center in the context of today’s 24/7 business climate and exponential data growth. Companies can no longer rely on data backups of individual systems and can no longer afford the time it takes to restore IT services from replicated data alone. The time to restore business to full operation is prohibitively expensive in terms of lost productivity and revenue. So what is the solution to this problem? Is there one? Sure: it’s automated, service-oriented recovery.

Automated disaster recovery takes the long series of complicated steps required by traditional manual recovery methods and automates them. Automated DR understands the specific order, processes, and procedures involved and applies them to the recovery, removing opportunities for human error and returning the company to normal operations within minutes rather than hours or days. Think of it as an insurance policy: you don’t know when the accident will occur, but you have peace of mind knowing that you will be protected if and when it does.

When systems fail in the absence of automation, data protection managers are like the nurses and doctors on “M*A*S*H” when incoming wounded arrive: a server goes down, and IT staff quickly assemble to triage and solve the most pressing problems (I hope I didn’t date myself too much with that reference). With automated disaster recovery, the IT manager can go straight to the automation console, right-click an icon, and start restoring the server. It removes the human element from the process.

There are a few things I feel automated disaster recovery should also have: flexible recovery, a way to test recovery before disaster strikes, and the ability to change with a business’s ever-evolving needs.
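As a rough illustration of what ordered, testable recovery automation means, here is a minimal Python sketch. The step names are invented for the example and do not represent RecoverTrac’s actual interface or workflow engine.

```python
# Hypothetical DR runbook; step names are invented for illustration.
RUNBOOK = [
    "present-replicated-luns",   # storage first
    "boot-database-server",      # then the data tier
    "boot-application-server",   # then the application tier
    "boot-web-frontend",         # then user-facing services
    "validate-service-health",   # confirm the service is actually back
]

def run(runbook, rehearsal=False):
    """Execute recovery steps strictly in order; a rehearsal only logs them."""
    for step in runbook:
        print(("REHEARSE: " if rehearsal else "EXECUTE: ") + step)
        # A real tool would call storage and hypervisor APIs here and halt
        # the sequence on failure rather than continuing blindly.

run(RUNBOOK, rehearsal=True)  # exercise the plan without touching production
```

The rehearsal flag is the point: the same ordered plan that runs in a real disaster can be exercised safely beforehand, which is how the testing requirement above gets met.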
If everyone agrees that disaster recovery (DR) is essential to ensuring the availability of the critical services that run the business, then why aren’t more people doing it? For starters, DR is time-consuming to implement and test and difficult to maintain. Industry surveys have also shown that businesses of all sizes forgo disaster recovery planning for numerous reasons; many times, action is taken only after financial loss or unplanned downtime occurs. In this two-part blog, I’d like to discuss the need for protecting complete data center services (such as e-mail, web servers, and databases), the challenge of setting up and testing recovery systems, and the benefits of automated DR.

Protecting critical business data and ensuring that it is accessible 24 hours a day, seven days a week, year-round is a central task for IT and data center managers. However, companies have to be concerned about more than just the data; they need to look at the systems and applications that serve the business. IT faces a challenge in providing data protection and DR for an entire infrastructure. The increase in the amount of data that needs protection shows no sign of slowing, while at the same time IT infrastructure is becoming more complex in ways we wouldn’t have dreamt of ten years ago. The current way DR is handled within the data center simply does not work well with new hybrid physical/virtual infrastructures consisting of equipment from multiple vendors running an array of sophisticated services.

Every minute spent trying to recover data and restore a critical service to full operation is time not spent on more important things, like running the business. Most companies cannot tolerate more than a few hours of downtime before the business is seriously impacted. But even if they could tolerate it, why would they? Time to recovery is of the essence. Recovery with traditional backup methods involves hundreds of steps that require systems to be rebooted and patches to be applied along the way. Not only is recovery from an actual disaster time-consuming and inefficient, but so are the necessary tests and DR rehearsals. Most organizations do not test their DR plans because there is too much risk of compromising daily operations.

As virtualization has changed the inherent structure of the data center, the way we do data protection and DR also must change. Single-point protection solutions that back up data from individual servers and applications are no longer adequate. Beyond restoring data, there is too much manual labor involved in putting back together all the pieces of a complete service. Each service needs to be managed and protected as one integrated whole. Today, when each minute of downtime means monetary loss for the business, you need to know that when you need recovery it is going to be fast and work right the first time.

In my next blog, I will talk about the solution to this problem: fully automated, service-oriented DR. It’s your data, your systems, your services, YOUR BUSINESS. Protect it.
Congratulations to our day four $100 winners: Steve Mansfield, Joe De Paola, Steven Asmussen, Eric Johnson, Emerson Bartolome, and Allan Moskaira! They gave our punching bag their best shots and walked away victorious!

[Photos: the six Day 4 winners]
Day Three of VMworld yielded four more boxing champs! Congratulations to Lauro Salais, Robert Bingham, Michael Broadbent, & Hans Nielsen for their punching prowess!

[Photos: the four Day 3 winners]
[Image: FalconStor wins Best of VMworld 2012]

FalconStor RecoverTrac 2.5 has been chosen as the winner of TechTarget’s Best of VMworld 2012 awards in the business continuity and data protection category. The annual Best of VMworld awards showcase the top technologies of the year, as chosen by a group of independent industry experts. FalconStor’s RecoverTrac DR automation tool was selected as a top candidate out of 200 entries for its unique process for performing physical-to-virtual backups and for the continuous data protection that FalconStor solutions offer.

FalconStor launched the latest version of RecoverTrac last week, on August 21st. If you missed the online launch, please visit the RecoverTrac page to watch the recording and learn how the newest features allow you to recover any service, any time, any place.
 
Congratulations to our Day 2 $100 winners, Mike Gerolami from Vancouver and Jeff Fantin from Phoenix, AZ! Mike & Jeff showed us their best one-two punches on our boxing machine!
[Photos: the two Day 2 winners]
