FalconStor Community


Early this month I read a story about an insurance company forced to pay $250,000 in damages for losing customer data: the private information of 1.5 million end customers. Given our litigious society, I think this type of legal action will only increase over time. Liability, or blame, for lost data or lost business is an intangible cost of data protection. As I frequently state, I'm in the data recovery business, or the business preservation business. Anybody can provide some type of data protection; the true measure is speed of recovery. I couldn't care less what you call a solution, so long as my revenue or business operations can quickly continue. This simple concept involves many complex technologies that must accommodate many different IT environmental factors. It is these complexities, and environmental differences, that quickly increase the costs of most data protection solutions.


I regularly interact with IT people at a variety of events and meetings, and I am often surprised by what I hear. First of all, they are uniformly busy with their jobs, projects, and emergencies, and they don't have time to keep up to date on all the associated technologies.

As long as their current inventory meets their business needs, they have more than enough to do in a day. When I ask about data protection and recovery, the common answer is that the status quo is not adequate, but they are too busy to change. Nevertheless, recovery by tape backup and transaction logs is the most common answer I hear, which leads me to conclude that traditional backup software remains the predominant solution in the data protection space.
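For readers less familiar with that traditional model, the logic is simple: restore the last full backup, then replay transaction logs forward to the desired point in time. Here is a minimal Python sketch of the idea; the names and data shapes are hypothetical, chosen only for illustration:

    # Toy model of traditional recovery: restore a full backup image,
    # then replay logged writes up to a chosen point in time.
    def recover(full_backup, transaction_log, target_time):
        state = dict(full_backup)            # step 1: restore the backup image
        for ts, key, value in transaction_log:
            if ts > target_time:
                break                        # stop at the desired recovery point
            state[key] = value               # step 2: replay each logged write
        return state

    backup = {"balance": 100}                          # last full backup
    log = [(1, "balance", 120), (2, "balance", 90)]    # writes since the backup
    print(recover(backup, log, target_time=2))         # {'balance': 90}

Simple as the idea is, every minute of that restore-and-replay sequence is downtime, which is exactly why speed of recovery matters.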


When we talk about disaster recovery (DR), we're often preparing for the possibility of routine power disruption or internal system failure. But as its name implies, DR is also about your data surviving real disasters – the kind Mother Nature delivers with little warning, the kind that swirl and churn and gain strength over the ocean before moving inland over cities and towns and data centers.

This is the kind of disaster I'm thinking about right now, as the 2010 hurricane season has opened with an unusually strong first storm. Today is Wednesday, 7 July, here in Texas where I live. Last week we had Hurricane Alex, which thankfully missed Texas but brought severe rain and damage to northern Mexico, including over 20 inches of rain in Monterrey. For those of you not familiar with Mexico, Monterrey is the country's third-largest city, with significant industrial, financial, and technology business. Now NOAA's National Hurricane Center is tracking another tropical cyclone with a 50% chance of developing into something larger, and of hitting Texas.


The deployment complexity and ongoing costs of DR implementations sometimes cause SMBs, and even large enterprises, to postpone or limit their DR plans, resulting in the adoption of less expensive and less effective data protection tactics. Many DR plans today depend on restoring from backups! This dependence is in itself a disaster waiting to happen. Even a partial data center restore from tape will take days, if not weeks. No business today can afford to shut down operations for a day, never mind a week! Here is a great source of information for DR planning and implementation.
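To put a rough number on that claim, consider a back-of-envelope estimate; every figure below is an assumption chosen for illustration, not a measurement:

    # Illustrative estimate of a full restore from tape.
    # All figures are assumptions, not benchmarks.
    data_tb = 20          # assumed amount of data to restore, in TB
    drive_mb_s = 120      # assumed sustained tape drive throughput, MB/s
    efficiency = 0.5      # assumed real-world factor: loads, seeks, verification

    data_mb = data_tb * 1024 * 1024
    hours = data_mb / (drive_mb_s * efficiency) / 3600
    print(f"~{hours:.0f} hours (~{hours / 24:.1f} days)")   # ~97 hours (~4.0 days)

And that assumes a single drive streaming perfectly, before media handling, cataloging, and application-level recovery are added on top.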

But again, what are the alternatives? When it comes to DR, the most basic process is data movement; this is why many organizations depend on tape to move data offsite, but that model will never provide rapid and effective DR execution. The alternative is replication, the most expensive component of a DR plan. Data replication can be performed at the host, network, or storage layer, and each has its own advantages. In this post I'll focus on storage-layer replication, as it is the most common replication method for DR implementations.
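As a conceptual illustration only, here is a toy model of asynchronous replication at the storage layer; the class and method names are hypothetical, not any vendor's API:

    import queue
    import threading

    class AsyncReplicator:
        # Toy model: writes are acknowledged locally first, then shipped
        # to the DR site in the background through an ordered journal.
        def __init__(self, local_store, remote_store):
            self.local = local_store        # dict-like: block address -> data
            self.remote = remote_store
            self.journal = queue.Queue()    # ordered log of pending writes
            threading.Thread(target=self._ship, daemon=True).start()

        def write(self, block_addr, data):
            self.local[block_addr] = data   # commit locally, low host latency
            self.journal.put((block_addr, data))

        def _ship(self):
            while True:
                addr, data = self.journal.get()
                self.remote[addr] = data    # transfer to the remote array

The trade-off is the recovery point: asynchronous replication keeps host latency low but lets the remote copy lag by the depth of the journal, while synchronous replication acknowledges a write only after the remote site has it.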


Last year was a significant one for virtualized servers in the government-agency data center. Many such organizations took advantage of the opportunity to make better use of existing budgets by consolidating physical infrastructure and commoditizing server hardware.

What many of those agencies have yet to act on, though, is the other half of the story.

In 2010, these organizations will move to address the data aspects of information technology and the value of storage virtualization. There are real economic savings to be had here for any IT team that recognizes its job did not end with server virtualization. I actually wrote an article in Government Computer News that appeared this month on this very subject: "Why 2010 is the year of the virtual data center."

Storage virtualization is accomplished by creating a virtual abstraction layer between physical or virtual servers and the existing physical storage. Once the storage is virtualized and the physical location of data is abstracted from the hosts, some interesting opportunities for savings become apparent.
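A minimal sketch of that abstraction idea follows; the names are hypothetical, and real products implement this in the storage data path rather than in application code:

    class VirtualVolume:
        # Toy abstraction layer: hosts address a stable virtual volume,
        # while each extent may live on any physical array in the pool.
        def __init__(self):
            self.mapping = {}   # virtual extent -> (physical array, physical extent)

        def place(self, virt_extent, array, phys_extent):
            self.mapping[virt_extent] = (array, phys_extent)

        def migrate(self, virt_extent, new_array, new_phys_extent):
            # Data can move between arrays (tiering, hardware refresh)
            # without the host ever seeing its addresses change.
            self.mapping[virt_extent] = (new_array, new_phys_extent)

        def resolve(self, virt_extent):
            return self.mapping[virt_extent]   # where the data really lives

Because hosts only ever see the virtual addresses, data can be tiered, migrated, or consolidated behind the mapping, and that flexibility is one place the savings come from.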
