But as I mentioned here in May, converting entire infrastructures to SSD doesn’t make much sense for most data centers today. Partial adoption does, and that is what Matt Prigge advocated in his “Information Overload” column in InfoWorld last week.
Prigge notes that “capacity and cost are still significant limiting factors to widespread adoption” of SSD, but that there is a “right way” to deploy these devices for actual ROI. Prigge advocates a hybrid option in which the most often used data is moved to the SSD while traditional disk handles the rest. This, he says, bridges the “capacity-versus-performance gap” at a price point most organizations can swallow.
Any hybrid solution that requires manual movement of the data to the SSD tier is going to suck up resources and eat capacity. And any eventual gains in speed will be diminished by the hassle of this constant migration demand. Several vendors are banking on this approach, but their users will certainly see additional consumption of CPU cycles and frustrated IT staff scrambling to predict which data stores will be most used at any given time. At the Green Data Center Conference last week in San Diego, one of the attendees talked about the benefits he’d seen from implementing SSDs 10 years ago; even then the benefits outweighed the cost, but the management overhead proved to be his biggest challenge and ultimately led him to abandon the project.
The growing number of organizations moving toward SSD would be better served by technology that intuits which data sets to copy to the high-performance layer based on the access pattern of each data set and the rank of the application requesting it. This is not only possible, it’s in use now, enabling numerous companies to cache data to the high-performance solid-state tier and react quickly to heavy I/O requests. At FalconStor we firmly believe that this capability is today’s best option to adopt SSD cautiously, partially, and intelligently.
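The promotion logic described above can be sketched roughly as follows. This is a minimal illustration, not FalconStor’s implementation: the class name, the promotion threshold, and the idea of weighting accesses by an application-priority factor are all assumptions made for the sake of the example.

```python
from collections import defaultdict

class HotDataCache:
    """Illustrative sketch: promote blocks to a fast (SSD-like) tier
    once their access score, weighted by application priority,
    crosses a threshold. Threshold and weighting are assumptions."""

    def __init__(self, promote_threshold=3.0):
        self.access_score = defaultdict(float)  # block_id -> weighted access score
        self.ssd_tier = set()                   # block_ids currently cached on SSD
        self.promote_threshold = promote_threshold

    def read(self, block_id, app_priority=1.0):
        # Higher-priority applications push their blocks toward promotion faster.
        self.access_score[block_id] += app_priority
        if (block_id not in self.ssd_tier
                and self.access_score[block_id] >= self.promote_threshold):
            self.ssd_tier.add(block_id)  # copy the hot block up to the SSD tier
        return "ssd" if block_id in self.ssd_tier else "disk"

cache = HotDataCache()
cache.read("blk-7")         # cold: served from disk
cache.read("blk-7")         # still below threshold
cache.read("blk-7")         # third access promotes the block to SSD
```

The point of the sketch is that promotion happens automatically as a side effect of normal reads, with no administrator deciding in advance which data stores will be hot.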