I love the data protection space. How many disciplines can be reduced to two simple metrics? The metrics I speak of are recovery time objective (RTO) and recovery point objective (RPO). These two measures truly break the entire process down to its bare fundamentals:
- RPO refers to data at risk, measured in time. For example, an RPO of 60 minutes indicates that you could lose up to an hour's worth of data, meaning all of the data generated in a 1-hour period. In simpler terms, the RPO is determined by how frequently the backup operation runs.
- RTO is a target. It indicates how much downtime you are willing to suffer before a complete system recovery. An RTO of 180 minutes means you will need to wait 3 hours before you are up and running again.
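The two metrics above boil down to simple time arithmetic: worst-case data loss equals the time since the last backup, and a recovery plan meets its RTO only if the measured restore time fits inside it. A minimal sketch in Python (the function names are my own, just for illustration):

```python
from datetime import timedelta

def meets_rpo(backup_interval: timedelta, rpo: timedelta) -> bool:
    """Worst-case data loss is the time since the last backup,
    so the backup interval must not exceed the RPO."""
    return backup_interval <= rpo

def meets_rto(measured_recovery: timedelta, rto: timedelta) -> bool:
    """Recovery meets the objective when the measured restore time
    (detection + restore + validation) fits within the RTO."""
    return measured_recovery <= rto

# Hourly backups against a 60-minute RPO: worst case loses one hour of data.
print(meets_rpo(timedelta(minutes=60), timedelta(minutes=60)))  # True
# A measured 4-hour recovery misses a 180-minute RTO.
print(meets_rto(timedelta(hours=4), timedelta(minutes=180)))    # False
```

The point of framing it this way: tightening RPO means running backups more often, while tightening RTO means investing in faster restore paths; the two are independent knobs.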
Although "The Times They Are a-Changin'" is the title of an old Bob Dylan song, it sure seems prophetic as it pertains to the storage industry today. In just a few short years, we have seen the fall of Fibre Channel and the rise of SAS storage as the mainstay of SAN storage.
Tape backup is still the most prevalent technology for data protection, and array-based replication is still the most common method of protecting mission-critical applications. But companies are warming to the idea of saving money by innovating in how they attack their largest ongoing IT cost, which is still backup and disaster recovery.
Those of you who were around in the storage industry back in the late '90s probably remember the amazing rise and fall of the storage service providers (SSPs), who offered storage as a service. There was real buzz in the industry back then around outsourcing storage resources to third parties, and how it would enable companies to focus on their core business rather than IT.
The term virtualization has been overused and overhyped by many companies, and this misuse of the term has caused some confusion. Simply put, virtualization means "abstraction": the virtualization solution abstracts the underlying details and complexity of whatever it is virtualizing.
Data deduplication is one of the most obvious choices for reducing overall infrastructure costs within the data center, which in turn reduces power, cooling, and floor space requirements for IT. Data deduplication at the file level (unstructured data) can be used to eliminate duplicates within production storage.
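To make the file-level case concrete, here is a minimal sketch of content-addressed deduplication: each file's contents are hashed, identical contents are stored only once, and a small index maps filenames to their digest. This is an illustration of the general technique, not any particular vendor's implementation:

```python
import hashlib

def dedupe(files: dict[str, bytes]) -> tuple[dict[str, bytes], dict[str, str]]:
    """File-level dedup: store each unique content blob once, keyed by
    its SHA-256 digest, and keep a per-file index of digests."""
    store: dict[str, bytes] = {}  # digest -> unique content blob
    index: dict[str, str] = {}    # filename -> digest
    for name, data in files.items():
        digest = hashlib.sha256(data).hexdigest()
        store.setdefault(digest, data)  # only the first copy is stored
        index[name] = digest
    return store, index

files = {
    "report_v1.doc": b"quarterly numbers",
    "report_copy.doc": b"quarterly numbers",  # duplicate content
    "notes.txt": b"meeting notes",
}
store, index = dedupe(files)
print(len(files), "files ->", len(store), "unique blobs")  # 3 files -> 2 unique blobs
```

The savings scale with how much duplication exists in the data set, which is why backup streams, full of near-identical nightly copies, benefit even more than production storage.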
In part 1, I discussed how to leverage Virtual Tape to green the datacenter and the environment. In this part, I will focus on how storage virtualization can help reduce power consumption and datacenter floor space requirements.
One of the hottest topics in IT these days is "Green". When the term green is used in reference to IT, it usually means more than just being environmentally friendly. For Information Technology, green also means needing less money to pay the bills for power, cooling, datacenter floor space, and the gas needed to ship tapes back and forth between the datacenter and offsite storage or DR location.
A colleague of mine brought this little gem to my attention, and in the interest of sharing, I thought y'all would also find it interesting. http://www.channelregister.co.uk/2009/01/02/it_trends_2009_fforecast/ Quick excerpt: "Just about every major company funds a junkyard of application systems and technologies attached to them.
In order to achieve greater efficiency from IT investments, most IT organizations leverage the Information Technology Infrastructure Library (ITIL) framework when implementing solutions within the organization.