BACK TO BASICS: WHAT CAN BE LEARNED FROM FOUNDATIONAL TECHNOLOGIES?
By Pete McCallum
I was taking a wander through the internet and for some reason searched for “Windows NT 3.1 architecture” in Google. Perhaps I was feeling nostalgic for the days when I was first dabbling in computers, swapping floppy disks instead of tossing a ball around with my friends. More likely, I was (and am) feeling that the current state of the datacenter is simply wading back into old territory and re-discovering the foundations of modern IT. Don’t get me wrong: this is not a bad thing! Far from it. But it does put a pause on technology movement for a little while. After you read this blog, open this link in another tab and read it ( https://en.wikipedia.org/wiki/Windows_NT_3.1 ). Focus on the “Operating System Goals” section. I had to laugh after doing some research on the difference between hypervisors and containers. I could just as easily draw your attention back to similar legacy technologies like DEC/VAX architectures, and we’d all have a good chuckle about just how modern our hypervisors really are. All in all, the goals of our datacenter architecture have stayed the same: portability, reliability, and personality. I personally really love the last one.
So let’s just put it out there: there is nothing new in technology today. Erasure coding is an improvement on RAID. Calm down. I said it, and I won’t take it back. All-flash is a hilarious marketing ploy to get you to think that slapping a bunch of flash in a disk form factor, and calling that same SAN you’ve been using all-new, is something exciting. Fast is exciting, but it is not new by a long stretch. Hypervisors put a new spin on mainframe-style workload sharing and isolation. Containers really follow the Windows NT architecture of shared-kernel workload tokenization. But these new names!!! Oh, for the love of marketing. Converged and hyper-converged are just more compact reference architectures. Not to exclude my own product: virtualization is a detached HAL. There, I said it. Middleware for hardware, with some protocol routing thrown in for portability.
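To see why erasure coding is an evolution of RAID rather than a revolution, consider the simplest erasure code of all: the XOR parity that RAID-5 has used for decades. A minimal sketch (illustrative toy blocks, not any product’s implementation):

```python
# RAID-5-style XOR parity is the simplest erasure code: any one lost data
# block can be rebuilt by XOR-ing the surviving blocks with the parity
# block. Modern erasure codes (e.g. Reed-Solomon) generalize the same
# idea to survive multiple simultaneous losses.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data blocks in one stripe
parity = xor_blocks(data)            # parity block written alongside them

# Simulate losing data[1], then rebuild it from the survivors + parity:
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == b"BBBB"
```

The “improvement” in modern erasure coding is really just richer math over the same insight: trade a little extra capacity for the ability to reconstruct what was lost.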
Even Big Data is more of a social commentary than a datacenter “thing.” We generate and keep everything. Digital packrats. At one point, there was someone who had a handle on where everything was, but all of our movement in the direction of agility and automation has opened a Pandora’s box of data overload ( https://en.wikipedia.org/wiki/Pandora%27s_box if you are interested). Analytics is a real thing, but by no means new. Enterprise service bus technologies and BI have been around for a long time, and are now, reactively, moving into the sprawl of infrastructure. Let’s talk about that, too.
One somewhat new aspect of technology is location, which falls directly under the portability goals of modern IT thought. Once we erase the variability of technology choices, we are left with a distributed workforce, nearly infinite endpoints, a global presence, and nearly limitless choices in vendors and variants of technology interconnectedness. Whew. That’s a lot of Scrabble words. It is increasingly apparent to me that we are still struggling to adhere to our three goals: portability, reliability, and personality.
Someone asked me recently at a show what I would spend my money on if I were an IT Director today. My gut reaction was: nothing. I wouldn’t spend a dime until something interesting comes along. But that is rather myopic. I would invest in any technology that can give me portability of my data and agility in choice, protect my business, and match my agenda and requirements. I would avoid any technology that tells me how and when to deploy, expand, or contract. I would avoid a technology that locks me into a given location. I would invest in anything that keeps track of what is growing or creeping through my data. I would invest in a product that talks to and watches over everything I have today, and has the vision to encompass future technology. Otherwise, what am I investing in?
Then this antagonist says to me: “Does your product do these things? Why should I settle for anything less?”
In order to answer that very deep question, we must dispassionately evaluate against some criteria. I cannot be agnostic if I am an evangelist, now can I? So I look at FreeStor, remove the marketing terminology and all the neat things we think it does, and assess it against the three goals set way back when: portability, reliability, and personality.
Will FreeStor accomplish portability of my data? On-premises and in the cloud? I would emphatically say “Yes!” Will FreeStor provision, protect, and mobilize any workload? I would also say “Yes!” to this one (should I mention we still support most flavors of UNIX in addition to x86?). Will FreeStor work with any current and future compute technology? Let’s face it, we can only build as fast as things are presented to us, but as much as possible, “Yes!” From physical to hypervisor to cloud, we can run in or around each of these topologies, with more being announced every day (so far no containers, but it’s coming!). Perhaps most importantly, can your product follow the dream of Software-Defined-anything and truly run on any platform without concern (see reliability)? Can I support my DevOps initiatives with consistent operations across vendor silos? I’m feeling pretty good about portability – it’s in the fiber of the product.
Will FreeStor provide reliability for my business? There has long been a misconception (misdirection?) in our industry that redundancy is the same thing as availability. Does having two or more copies of my data mean I’m safer? Does having a backup mean my business won’t be impacted by a disaster? Alternately, does reliability mean that when I go to install something new, it will work as advertised? From a business perspective, reliability may mean that when a customer asks for something to happen, I am ready to service the request in a timely fashion. Does my solution become more or less reliable if I rely on a 3rd party to operate it? I think all of these are valid questions and scenarios, and FreeStor has been built from the ground up to address both redundancy and availability of data and business. A platform like ours has to be able to get lost things back, restore function, and avoid loss and downtime in the process.
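The redundancy-versus-availability distinction is easy to gloss over, so here is a back-of-envelope sketch. The 99% figures below are made-up assumptions for illustration, not measurements of any product:

```python
# Illustrative numbers only: two independent copies of a 99%-available
# component yield 99.99% availability in parallel. But if both copies
# hang off one shared dependency (one network, one site, one admin
# mistake), that dependency multiplies back in and can drag the whole
# system below what a single copy would have delivered on its own.

def parallel_availability(a, n):
    """Availability of n independent redundant copies, each available a."""
    return 1 - (1 - a) ** n

pair = parallel_availability(0.99, 2)   # ~0.9999
with_shared_dependency = pair * 0.99    # ~0.9899 -- worse than one copy alone
print(f"redundant pair: {pair:.4f}, "
      f"with shared dependency: {with_shared_dependency:.4f}")
```

Which is exactly the point: copies alone don’t make a system available; the dependencies around them decide that.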
Now we come to personality and what it means to you. Can FreeStor maintain multiple personalities to match the personality of my business needs at this point in time? The term personality was used in that wiki article I pointed out to you to indicate the operating system being able to support multiple different workloads that were once restricted to a specific (foreign) platform. Whoa. Can FreeStor support multiple personalities? If you’ve ever met our company leadership, you’d know that’s a “Yes!!!” All joking aside, FreeStor has been designed to adjust to individual workloads not only from a provisioning and protection (application/data-aware, anyone? Yes!) perspective but from a performance and availability perspective as well. FreeStor can operate as local disk, raw disk, cloud resources, or virtual disks, each with different profiles and capabilities, even to the same machine. FreeStor can jump protocols and form factors, operating as both virtual and physical resources as needed for the workload. So, yes, FreeStor has tons of personality.
I did make a comment outside of the big three that I thought I should pull together before moving on: the concept of data awareness. It has been said by many that one of the reasons storage hasn’t moved as much as other realms of technology comes down to two key elements: it’s far easier to do storage wrong than it is to do it right, AND data is just too dang important to mess with. Couple those two factors with the massive explosion in the sheer volume of data, and the only reason we can find anything is the speed at which modern compute can operate. FreeStor added a layer of full-stack analytics into the storage layer to assist with all aspects of storage operations. While analytics is not innovation in and of itself, the goal is to make use of all the performance and utilization data, coupled with rich metadata, to optimize operations across any underlying or client environment. So not only do we know what our data has done and where it has been, we can determine where it should be before it needs to be there. A tall order for sure, but I believe it’s where storage needs to be.
All in all, we’ve found that while 3D NAND and 80Gb iSCSI are really neat improvements, I couldn’t look you in the eye and tell you that they will fundamentally change how you do business in and of themselves. I couldn’t tell you that building a Hadoop-based data warehouse will allow you to understand your business any better than if you ran a really great spreadsheet. But I can tell you that just because something works for eBay does not mean it will work at the scale of your local dentist’s office. There is room for all the little permutations and capabilities of technology. However, in the end, when you type your name into a field on a web form, or save that Word document, you need to know it is committed to something persistent, and it will be there when you get back to it. No matter what technology that data is stored to, it must maintain portability, reliability, and personality – coupled with awareness – in order to be of value.
So in the blink of an eye, I respond to this show-goer at my booth: “I may not be all things to all people, but I can be something pretty amazing for you if you give me a shot to show you.”
And I think that’s pretty refreshing in today’s technology ecosystem.