The Economist: Clouds On The Horizon
Friday, October 24, 2008
Cloud Computing And SaaS
…..Cloud computing, in essence, takes the idea of distributed computing a step farther. It adds a couple of layers to the IT stack. One is made up of the cloud providers, such as Amazon and Google. The other is software that helps firms to turn their IT infrastructure into their own cloud, known as a “virtual operating system for data centres”…..
…..Software vendors will have to find new ways to charge for their wares: in the cloud, tying licensing fees to the number of users, for instance, will be difficult, since services will mostly be consumed by other machines. More importantly, the corporate world has become less and less willing to buy software for large sums of money, so software firms listed on America’s stockmarkets now make most of their profits from maintenance and other services. SAP will increase its annual maintenance fees to at least 22% of a program’s value over the next few years, in line with those of Oracle, its main rival…..
…..Once Salesforce and NetSuite had shown that the SaaS model works, the incumbents began to move faster. In September last year, for instance, SAP presented “Business ByDesign”, a package of web-based enterprise applications for smaller businesses. But success will not come easily. SAP has slowed down the introduction of the new service because it still needs to work out how to run it cheaply enough to make a reasonable profit. Pure SaaS providers also have a lot on their minds. Some experts, such as Joshua Greenbaum of Enterprise Applications Consulting, reckon that few will ever be as profitable as traditional software firms. Although it is almost a decade old, Salesforce started making money only in 2006, mainly because it first had to spend heavily on marketing to attract customers. But now that the service has 1m users and revenues of more than $1 billion, these costs will come down, says the firm…..
…..Even if the cloud is likely to transform the IT industry, some things will stay the same. One is the importance of lock-in. If anything, companies and developers will be even more dependent on cloud platforms and applications than they are on the old kind. SaaS promotes the “hollowing out” of IT: a firm that needs to migrate to another system will no longer have the required expertise. When Facebook, say, makes a change to its platform, developers have no choice but to go along with it. Some are already calling for a “Cloud Computing Consortium”, in the mould of the World Wide Web Consortium (W3C), to set standards that allow applications to migrate easily from one platform to another. One standard initiative, called “OpenSocial”, already allows the same web-based application to run in several social networks, which are also clouds of sorts. But standards go only so far. Some fear that one company could try to monopolise other key parts of the cloud; ironically, Microsoft worries that Google is doing exactly that with the online advertising market.
To Steve Ballmer, Microsoft’s boss, Google’s advertising platform is like a flywheel that picks up speed as more websites attract more advertisers, and vice versa. Eric Schmidt, Google’s chief executive, denies any evil intent to achieve world domination. He argues, with some justice, that it would be hard for Google to control the cloud, if only for technical reasons: much of it is already based on open standards, and its loose structure does not lend itself to locking customers in. Mr Schmidt promises that Google will not lock its users in either. “Our competitive advantage is not from lock-in”, he says, “but from having specialised knowledge of how to build data centres and how to build new software that is not reproducible, such as our search algorithm. This is how we make our money.”…..
Cloud Computing And SOA
…..More and more software will become a service delivered online. More importantly, applications, web-based or not, will no longer come as a big chunk of software, but will be made up of a combination of electronic services—a shift that has picked up a lot of speed since computing began moving into the cloud. To understand this new way of building applications, known as “service-oriented architecture” (SOA), think of a culinary analogy. Whereas the old chunk of software resembles a precooked meal that just has to be popped into the oven, the new architecture is more like a restaurant. It is a service in itself but also a combination of sub-services. There is the waiter who takes the order and conveys it to the kitchen. There is the cook who prepares the food. And there are the cleaners who keep the place tidy. Together they create the “application”: a restaurant. The importance of this shift from a monolithic product to services is hard to overstate. In a sense, it has seeded the cloud, allowing the droplets—the services that make up the electronic vapour—to form. It will allow computing to expand in all directions and serve ever more users. The new architecture also helps the less technically minded to shape their own clouds…..
…..Just as for the industrialisation of data centres, there is a historic precedent for this shift in architecture: the invention of movable type in the 15th century. At the time, printing itself was not a new idea. But it was Gutenberg and his collaborators who thought up the technologies needed to make printing available on a mass scale, creating letters made of metal that could be quickly assembled and re-used. Similarly, the concept of modularity has been around since the early days of computing. “Everything in computer science is to just write less code. What is the technique for writing less code? It’s called subroutines,” said Bill Gates, Microsoft’s founder, in a recent interview.
A subroutine is a part of a program that can be re-used, just like movable type. The idea, says Mr Gates, has always been to apply this principle of a subroutine more broadly. Yet this did not happen, mainly because the cost of computing fell much faster than that of communications. Ever cheaper and more powerful chips made it possible to move from mainframes to minicomputers to personal computers (PCs) and now to hand-held devices. But connecting all these pieces remained difficult and expensive, which meant that such devices all had to come with their own data and chunky programs. Now, thanks to plenty of cheap bandwidth and more and more wireless connectivity, computing is able to regroup into specialised services, or Mr Gates’s subroutines: “We now live in a world where…[a] subroutine can exist on another computer across the internet.” Part of Gutenberg’s genius was to recognise the need for all the letters to be identical in height so they could be easily combined. Similarly, for computing services to work there had to be robust technical standards. Only a few years ago this seemed far beyond the IT industry’s reach. Most firms insisted on their proprietary technology, mostly to lock in their customers. Again, cheaper communications helped to bring about change. The success of the internet demonstrated the huge benefits of open standards and forced vendors to agree on common ways for their wares to work together. One result is a stack of something called “web-services” standards. Service-oriented architecture first showed up in open-source software but was quickly adopted by big enterprise-software vendors because they had a pressing need for it, says Jim Shepherd of AMR Research, a consultancy. Big software vendors, for instance, had to find a way to untangle the hairball of code that their products had become, or else they themselves would choke on it. Customers wanted more flexible and extensible programs. Think back to the gastronomic example. 
A precooked meal is hard to change, and so are traditional software applications. By contrast, a restaurant can easily change its menu and its style of operation. Similarly, SOA-based software allows companies to alter their business processes, such as the way they handle an order through to collecting the cash…..
…..Despite the millions of dollars spent on marketing it, SOA has not really taken off yet. But many web-based applications for consumers rely on this concept. The prime example is Google Maps. When the online giant launched the service, programmers quickly figured out how to mix the maps with other sources of information…..
…..Yet it is unlikely that the software cloud will end up as a vast nebula of thousands of specialised services. Even creating a service-oriented architecture is “no silver bullet” against complexity, in a famous phrase by Frederick Brooks, an elder of computer science. Although web services allow online offerings to connect, for instance, it is costly to synchronise their data. And it would not make sense for any firm to bet its business on simple mash-ups. As software markets mature, they tend to form two kinds of clumps: integrated suites of applications, and platforms on top of which others can build programs. Both forms are already emerging. On the applications side there are Google Apps and Zoho, which is even more comprehensive. It encompasses a total of 18 applications, including word processing, project management and customer-relationship management (CRM). As for platforms, there are already plenty, in different shapes and sizes. For enterprise applications, SAP has built one called Netweaver. Oracle offers something similar called Fusion. Last year, Salesforce launched a “platform as a service”, allowing other firms to use the plumbing that supports its own CRM offering. More recently platforms for consumer services have been proliferating. Facebook, a social network, was the first to become one in 2007.
Other big online firms have followed suit or will do so soon: Google with App Engine, Yahoo! with Y!OS and Microsoft with a “cloud operating system” thought to be called Windows Strata. Some predict a platform war to rival the epic fights between Microsoft’s Windows and Apple’s Macintosh…..
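The restaurant analogy can be sketched in a few lines of code. The sketch below is purely illustrative — the function names are invented for this example — but it captures the core idea of SOA: the “application” is nothing more than a composition of small, independently replaceable services, each of which, as Mr Gates puts it, could in principle run as a subroutine on another computer across the internet.

```python
# A toy illustration of the restaurant analogy for service-oriented
# architecture (SOA). All names here are hypothetical.

def take_order(customer_request):
    # The "waiter": turns a customer's request into a structured order.
    return {"dish": customer_request, "status": "ordered"}

def prepare_food(order):
    # The "cook": fulfils the order. In a real SOA this service could
    # live on another machine and be reached over the network.
    order["status"] = "prepared"
    return order

def restaurant(customer_request):
    # The composed application: a pipeline of sub-services.
    return prepare_food(take_order(customer_request))

meal = restaurant("soup")
print(meal)  # {'dish': 'soup', 'status': 'prepared'}
```

Swapping the cook for a different implementation (or a remote one) changes nothing for the waiter; that separation is what lets SOA-based software alter a business process without rewriting the whole monolith.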
Cloud Computing And Data Centers
…..Most corporate data centres today house armies of “systems administrators”, the craftsmen of the information age. There are an estimated 7,000 such data centres in America alone, most of them one-off designs that have grown over the years, reflecting the history of both technology and the particular use to which it is being put. It is no surprise that they are egregiously inefficient. On average only 6% of server capacity is used, according to a study by McKinsey, a consultancy, and the Uptime Institute, a think-tank. Nearly 30% of servers are no longer in use at all, but no one has bothered to remove them. Often nobody knows which application is running on which server. A widely used method to find out is: “Let’s pull the plug and see who calls.”…..
…..Limited technology and misplaced incentives are to blame. Windows, the most pervasive operating system used in data centres, allows only one application to run on any one server because otherwise it might crash. So IT departments just kept adding machines when new applications were needed, leading to a condition known as “server sprawl”. This made sense at the time: servers were cheap, and ever-rising electricity bills were generally charged to a company’s facilities budget rather than to IT.
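The utilisation figures quoted above make the scale of the waste easy to quantify. The short calculation below is a back-of-the-envelope sketch: the 6% figure is from the McKinsey/Uptime study, but the fleet size and the 60% consolidation target are assumptions made up purely for illustration.

```python
# Back-of-the-envelope arithmetic on server consolidation. Only the 6%
# average-utilisation figure comes from the study quoted in the text;
# the other numbers are illustrative assumptions.

servers = 1000             # hypothetical fleet size
avg_utilisation = 0.06     # 6% average use (the study's figure)
target_utilisation = 0.60  # assumed safe ceiling after virtualisation

total_work = servers * avg_utilisation           # real work, in "server-loads"
servers_needed = total_work / target_utilisation
print(round(servers_needed))  # 100 — about a tenth of the original fleet
```

On these assumptions the same work fits on roughly a tenth of the hardware, which is why virtualisation, discussed next, matters so much to data-centre economics.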
To understand the technology needed to industrialise data centres, it helps to look at the history of electricity. It was only after the widespread deployment of the “rotary converter”, a device that transforms one kind of current into another, that different power plants and generators could be assembled into a universal grid. Similarly, a technology called “virtualisation” now allows physically separate computer systems to act as one. The origins of virtualisation go back to the 1960s, when IBM developed the technology so that its customers could make better use of their mainframes. Yet it lingered in obscurity until VMware, now one of the world’s biggest software firms, applied it to the commodity computers in today’s data centres. It did that by developing a small program called a hypervisor, a sort of electronic traffic cop that controls access to a computer’s processor and memory. It allows servers to be split into several “virtual machines”, each of which can run its own operating system and application. “In a way, we’re cleaning up Microsoft’s sins,” says Paul Maritz, VMware’s boss and a Microsoft veteran, “and in doing so we’re separating the computing workload from the hardware.” Once computers have become more or less disembodied, all sorts of possibilities open up. Virtual machines can be fired up in minutes. They can be moved around while running, perhaps to concentrate them on one server to save energy. They can have an identical twin which takes over should the original fail. And they can be sold prepackaged as “virtual appliances”. VMware and its competitors, which now include Microsoft, hope eventually to turn a data centre—or even several of them—into a single pool of computing, storage and networking resources that can be allocated as needed. Such a “real-time infrastructure”, as Thomas Bittman of Gartner calls it, is still years off. But the necessary software is starting to become available.
In September, for instance, VMware launched a new “virtual data-centre operating system”. Perhaps surprisingly, it is Amazon, a big online retailer, that shows where things are heading. In 2006 it started offering a computing utility called Amazon Web Services (AWS). Anybody with a credit card can start, say, a virtual machine on Amazon’s vast computer system to run an application, such as a web-based service. Developers can quickly add extra machines when needed and shut them down if there is no demand (which is why the utility is called Elastic Compute Cloud, or EC2). And the service is cheap: a virtual machine, for instance, starts at 10 cents per hour. If Amazon has become a cloud-computing pioneer, it is because it sees itself as a technology company. As it branched out into more and more retail categories, it had to develop a sophisticated computing platform which it is now offering as a service for a fee. “Of course this has nothing to do with selling books,” says Adam Selipsky, in charge of product management at AWS, “but it has a lot to do with the same technology we are using to sell books.” Yet Amazon is not the only big online company to offer the use of industrial-scale data centres. Google is said to be operating a global network of about three dozen data centres loaded with more than 2m servers (although it will not confirm this). Microsoft is investing billions and adding up to 35,000 servers a month. Other internet giants, such as Yahoo!, are also busy building huge server farms…..
…..So IDC thinks that many data centres will be consolidated and given a big makeover. The industry itself is taking the lead. For example, Hewlett-Packard (HP) used to have 85 data centres with 19,000 IT workers worldwide, but expects to cut this down to six facilities in America with just 8,000 employees by the end of this year, reducing its IT budget from 4% to 2% of revenue. Other large organisations are following suit.
Using VMware’s software, BT, a telecoms firm, has cut the number of servers in its 57 data centres across the world from 16,000 to 10,000 yet increased their workload. The US Marine Corps is reducing the number of its IT sites from 175 to about 100. Both organisations are also starting to build internal clouds so they can move applications around. Ever more firms are expected to start building similar in-house, or “private”, clouds. The current economic malaise may speed up this trend as companies strive to become more efficient. But to what extent will companies outsource their computing to “public” clouds, such as Amazon’s? James Staten of Forrester Research, a market-research firm, says the economics are compelling, particularly for smaller firms. Cloud providers, he says, have more expertise in running data centres and benefit from a larger infrastructure. Yet many firms will not let company data float around in a public cloud where they could end up in the wrong hands. The conclusion of this report will consider the question of security in more detail. It does not help that Amazon and Google recently made headlines with service interruptions. Few cloud providers today offer any assurances on things like continuity of service or security (called “service-level agreements”, or SLAs) or take on liability to back them up. As a result, says Mr Staten, cloud computing has not yet moved much beyond the early-adopter phase, meaning that only a few of the bigger companies are using it, and then only for projects that do not critically affect their business. The Washington Post, for instance, used Amazon’s AWS to turn Hillary Clinton’s White House schedule during her husband’s time in office, with more than 17,000 pages, into a searchable database within 24 hours. NASDAQ uses it to power its service providing historical stockmarket information, called Market Replay. 
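The economics Mr Staten finds compelling can be sketched with the article’s own ten-cents-per-hour figure. The workload numbers below are invented for illustration; the point is only the gap between paying for a short burst of machines and keeping the same capacity running all month.

```python
# A sketch of EC2-style elasticity economics. Only the $0.10-per-hour
# starting price comes from the text; the fleet sizes are made up.

PRICE_PER_HOUR = 0.10  # quoted starting price per virtual-machine hour

# A burst job in the Washington Post mould: many machines, briefly.
burst_machines, burst_hours = 200, 24
burst_cost = burst_machines * burst_hours * PRICE_PER_HOUR

# The same capacity kept running around the clock for a 30-day month,
# as a fixed in-house fleet effectively would be.
always_on_cost = burst_machines * 24 * 30 * PRICE_PER_HOUR

print(f"burst: ${burst_cost:.2f}")          # burst: $480.00
print(f"always-on: ${always_on_cost:.2f}")  # always-on: $14400.00
```

On these made-up numbers a day-long burst costs a thirtieth of a permanently provisioned fleet, which is why the model appeals most to smaller firms with spiky workloads.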
Stefan van Overtveldt, the man in charge of transforming BT’s IT infrastructure, thinks that to attract more customers, service providers will have to offer “virtual private clouds”, fenced off within a public cloud. BT plans to offer these as a service for firms that quickly need extra capacity. So there will be not just one cloud but a number of different sorts: private ones and public ones, which themselves will divide into general-purpose and specialised ones. Cisco, a leading maker of networking gear, is already talking of an “intercloud”, a federation of all kinds of clouds, in the same way that the internet is a network of networks. And all of those clouds will be full of applications and services.