In his recent blog post, Deduplication - The Power of Flexibility, Gary Parker discusses the importance of data deduplication and the trade-offs among the various deduplication options available in the market.
One particularly interesting point was his comment that “for the highest performance levels, a recommended best practice is to use flexible deduplication policies to leverage post-process deduplication for the initial backup (for speed), and then switch to inline deduplication for subsequent backups.” I would like to expand on that, because it is an important element of a good deduplication implementation.
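To make the trade-off concrete, here is a minimal sketch of that flexible policy in Python. The function name and the mode strings are hypothetical, not any vendor's API; the point is simply that the mode decision hinges on whether this is the initial ingest or a subsequent backup.

```python
def choose_dedup_mode(previous_backups: int) -> str:
    """Pick a deduplication mode for the next backup job (illustrative only).

    Initial backup: defer deduplication until after the data lands on disk
    ("post-process"), so ingest runs at full speed. Subsequent backups:
    deduplicate inline, before writing, since most chunks already exist
    in the store and can be skipped immediately.
    """
    if previous_backups == 0:
        return "post-process"  # fast first ingest, dedup afterward
    return "inline"            # later backups dedup against the existing store
```

The design insight behind the policy: the first backup has almost nothing to deduplicate against, so inline hashing only slows it down, while every later backup is mostly duplicates and benefits from filtering them out before they ever hit disk.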
Let’s call it what it is: data deduplication is the waste management system of the storage industry, and just as with any other waste management process, your system needs to be very efficient. But to start, as with any epidemic, let’s take a look at the symptoms of data duplication. The biggest duplicate producer in today’s IT world is the traditional backup process. Yes, I’m talking about the antiquated, passé, and totally broken batch backup process that produces far more data than you can ever put to use, and yet far less than what you really need.
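Why does batch backup produce so much waste? Each nightly job re-ingests data that is almost entirely unchanged since the night before. A toy sketch shows the principle behind chunk-level deduplication (this is an illustration, not any product's implementation): hash each chunk, store each unique chunk once, and keep only a list of hashes per backup.

```python
import hashlib

def deduplicate(chunks):
    """Store each unique chunk once, keyed by its SHA-256 digest.

    Returns (store, recipe): store maps digest -> chunk bytes;
    recipe is the ordered list of digests needed to rebuild the stream.
    """
    store = {}
    recipe = []
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # keep only the first copy
        recipe.append(digest)
    return store, recipe

# Two nightly "full backups" of mostly unchanged data:
night1 = [b"block-A", b"block-B", b"block-C"]
night2 = [b"block-A", b"block-B", b"block-D"]  # only one block changed

store, _ = deduplicate(night1 + night2)
# Six chunks ingested, but only four unique chunks stored.
```

In this toy run, two full backups cost barely more storage than one - which is exactly the inefficiency in batch backup that deduplication is cleaning up after.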
CIOs and their senior leadership teams know that the existing model of backup is consuming them with day-to-day tactical problems. Research confirms that this experience is widespread and indicates that it will lead to major change in the next two years.
Gartner reports that by 2013:
- At least 30 percent of organizations will have changed their primary backup vendor due to frustration over cost, complexity, or capability.
- 50 percent of midsize organizations will have implemented tiered recovery architectures.
- More than 75 percent of large enterprises will have made similar changes to eliminate their outdated and burdensome backup windows.