Recently I was listening to an Earth Day interviewee claim that nuclear power, despite its shortcomings, is still strongly advocated by corporations and government agencies mostly because nuclear energy is centrally controlled. Why else go to such elaborate lengths to boil water? Central control means fewer people own the pie, and each owner keeps a larger share of the money. Conversely, solar, wind, and hydrogen are largely decentralized, effectively rendering obsolete the business of large, centrally controlled power organizations.
We know that distributing work in a decentralized manner is, among other things, a good idea. The Internet was born from this thinking: designed to be highly tolerant of small, or even large, segments failing. The military knows that relying on central control makes you both vulnerable and dependent, so the Internet requires no central authority to operate in any fundamental sense. If a failure occurs, traffic routes around it. This is the aspect, ironically created through military funding, that now physically embodies democracy: disparate entities functioning together loosely as a greater whole, both individually free and collectively resilient.
It was not always so. Just a couple of decades ago, Apple created its famous commercial in which the beautiful and free “new order” smashed the tyranny of Big Brother and his kowtowing minions. IBM mainframes, those huge repositories of centrally controlled information, were the mainstay of corporate and government life. When they failed, everything stopped. Your only choice was to call IBM, whose agents arrived en masse, unsettlingly dressed alike in creepy dark suits, to set things right so that business could carry on. As long as you had purchased the right plan….
When Apple came along with computers for humans, or “end users” in corporate IBM-speak, IBM realized its business model had to change. It had already branched into “distributed computing” by installing smaller mainframes at customers’ satellite offices that fed into larger, central mainframes. Now it was just a matter of embracing these “personal” computers as well. Although the move to distribute processing to end users was resisted, mostly by the technorati themselves, and doomsaying abounded, newly freed employees could finally have their way with their own information, and productivity soared. People could get what they needed when they needed it, change it into any form they could imagine, and were no longer wholly dependent upon centralized resources and control.
Yet strangely, a trend seems to be moving us back toward the centralized control of information processing, glitteringly rebranded as some amorphous “cloud.” The reality is, this cloud is just a collection of CPUs and storage devices, very much like any latter-day mainframe. In essence, the big Old Iron has returned, and we’re eagerly handing our data processing capabilities right over to it. And it’s not even our mainframe anymore; it’s someone else’s. Some might say it’s not a mainframe but a cluster: a collection of CPUs and memory with access to large, fast data storage. Those people should take another look at what latter-day mainframes are.
Even if we do get past the cloud of marketing and look plainly at using another company’s data processing services, certain realities remain. Maintaining 100 percent uptime is a holy grail. Despite all the effort and cleverness a systems engineer will devote to uptime, the fact is that we return to a single point of failure every time we put something on the cloud, unless we use the cloud as merely a supplementary or backup mechanism, or maintain such mechanisms ourselves. And there is little, if any, transparency.
Even several days after a major failure of the largest cloud, no detailed information has been provided about what actually went wrong or what is being done to mitigate such incidents in the future. Even IBM, in the days of Old Iron, would provide immediate and ongoing detailed status reports. But “the cloud”…who knows? Right?
One last thing to consider, beyond central points of failure and their accompanying performance limitations and benefits, is that using another company’s mainframes creates a single point of access for government surveillance and control. When everything is on the cloud, the government need only deal with one company: one ring to rule them all, so to speak. During the infamous illegal government wiretapping case that broke during the Bush era, the government compelled AT&T to route our communications through a single hub in San Francisco so it could snoop. The centralized Old Iron model makes this kind of government behavior simple, whereas the distributed model once again points us toward democratization.
As the dust settles from this failure, the spin, dutifully echoed by all the tech heads currently ensorcelled with the cloud computing moniker, will be that there is nothing wrong with cloud computing. It was, in fact, user error: the customers who were too cheap to purchase a second or third redundant site at another data center (or region) deserved what they got. And strangely, they won’t even notice that this implies multiple clouds, nor will they ask how this cloud differs, in essence, from any well-managed colo rental space.
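For what it’s worth, that “second redundant site” is concrete enough to script. Below is a minimal sketch of launching the same server image in two regions using the Python boto library against EC2-style APIs; the AMI IDs are placeholders, and credentials are assumed to come from your environment or boto configuration.

    import boto.ec2

    # One AMI per region: the same server image must be copied to each
    # region ahead of time. These IDs are placeholders, not real images.
    REGIONS = {
        'us-east-1': 'ami-11111111',
        'us-west-1': 'ami-22222222',
    }

    def launch_redundant(instance_type='m1.small'):
        """Launch one instance in each region, so that no single
        region remains a single point of failure."""
        instances = []
        for region_name, ami_id in REGIONS.items():
            # Credentials are read from the environment or ~/.boto.
            conn = boto.ec2.connect_to_region(region_name)
            reservation = conn.run_instances(ami_id, instance_type=instance_type)
            instances.extend(reservation.instances)
        return instances

Of course, two regions run by the same company still share one billing relationship, one API, and one point of government access, which is rather the point.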
If anything comes of this, perhaps people might start saying the plural “clouds” instead of the singular, amorphous “cloud.” I doubt it. It’s one of those sensationally brilliant marketing accidents that is perpetually reinforced by throngs of parrots. What we must learn is to start asking the questions once again: Who are we renting our servers from? Who are we giving our data, and our customers’ data, to? And why?
Perhaps cloud fans would find Eucalyptus interesting: open-source software that lets you run an AWS-compatible cloud on your own hardware, so that the “cloud” can be iron you actually own.
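Because Eucalyptus speaks the same EC2-style API, the same tooling can simply be pointed at machines you control. Here is a minimal sketch using boto; the host and keys are placeholders, and the port and path reflect older Eucalyptus defaults, so confirm them against your own installation.

    import boto
    from boto.ec2.regioninfo import RegionInfo

    # Everything below is a placeholder for your own private installation.
    # Port 8773 and path /services/Eucalyptus were early Eucalyptus defaults.
    region = RegionInfo(name='eucalyptus', endpoint='cloud.example.com')
    conn = boto.connect_ec2(
        aws_access_key_id='YOUR-EUCA-ACCESS-KEY',
        aws_secret_access_key='YOUR-EUCA-SECRET-KEY',
        is_secure=False,
        region=region,
        port=8773,
        path='/services/Eucalyptus',
    )
    print(conn.get_all_images())  # the same EC2-style call, on iron you own

Same API, same scripts, but the single point of access, and the answer to who we are giving our data to, becomes you.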