RightScale Cloud Journal


McKinsey, The Cloud, and Fuzzy Calculations

McKinsey also knocks the uptime factor, claiming that enterprises set their own SLAs at 4 9s or higher

On April 15th, McKinsey released a report called “Clearing the Air on Cloud Computing.”  Its premise was that the cloud is actually quite a bit more expensive for large corporations than running their own datacenters.  While it gives a nod to small and medium businesses, stating that the cloud may make sense for them, the top-line message was that cloud services overcharge for things companies could do for themselves.  The piece ends up being a push for virtualization, and knocks Windows as a main cost driver of moving to the cloud.

Report Out
The report starts out with McKinsey’s view on the cloud.  They lay out that the premise for the cloud has been lower cost and faster time to market, but the reality is that these claims are overstated and that “cloud computing” is at the top of the Gartner hype-cycle.

The report takes it one step further to claim that since there is no agreed-upon definition of what the “cloud” is (apparently they cite a study that found 22 definitions of the “cloud,” which seems low to me considering the conversations I hear at conferences and on news groups), large companies should not think about “internal clouds” but rather focus on the immediate benefits of virtualizing servers, storage and network operations.  They posit that the newness of the cloud is distracting IT departments’ attention from technologies that “actually deliver sizeable benefits; e.g. aggressive virtualization.”

The early part of the report unfortunately spends as much time as many conferences do these days on the minutiae of which definition is right and what “the cloud” means.  More than anything, these diversions are tiresome for the observer and confusing for IT managers.  They zero in on the following traits:

  1. Hardware management is abstracted
  2. Capex shifts to opex
  3. Demand for resources is elastic

That sounds like what we presented at the Azure launch at PDC, but far be it from me to ask McKinsey to give Microsoft credit for the definition.

They call Windows Azure a cloud example, rather than the Azure Services Platform.  This confusion is consistent with the customer and press/blogger sentiment I am seeing.  Windows Azure is one piece of the overall Microsoft cloud play.  It’s an application hosting environment, which serves as the foundational, though not required, layer for other code execution paths in the Azure Services Platform.  One can build applications that live completely on-premises without using Windows Azure, but that utilize other pieces of the Azure Services Platform.

They do call out the difference between a cloud and cloud services.  Cloud services share the two key tenets of hardware abstraction and elastic scaling.  A given service may or may not run on top of a cloud (e.g. SaaS).

McKinsey makes the mistake of confusing operating costs and startup costs.  Small companies use clouds because of startup costs, the cost of capital, and the availability of funds.  Those companies are not already invested in large datacenters and likely lack the resources to build their own.  Large companies, by contrast, have sunk costs in their datacenters, and will most likely claim externally that their operating costs are much lower than they really are.  Over time, as they have to think about expanding and building new datacenters with new equipment, large companies will most certainly look at the cloud in much the same way small companies do now.

McKinsey lays out the four main hurdles to adoption of cloud by large companies:

  1. Financial – cloud is not cost effective compared to large company datacenters (calculations to follow)
  2. Technical – security and reliability concerns, and re-architecting of apps.  I’m not sure about the first two, since they don’t offer any data (in which case, it’s a perception issue).  The re-architecting point is also confusing: since AWS is essentially virtual hosting, you can move your apps to AWS with little to no work.  Azure is a different story, but AWS is the focus of this report.
  3. Operational – perceptions of IT flexibility have to be appropriately managed
  4. Organizational – org changes will be required to operate in a cloud world


The report claims the “typical” enterprise datacenter has the following metrics:

  • 10% utilization
  • $20M/MW
  • $0.10/kWh
  • $14K/server (2 CPUs, 4 cores each)

We finally get to the calculations for large and small/medium companies on slides 23-24.  They don’t show their work, but claim that the Total Cost of Assets for this typical datacenter is $45/month per CPU equivalent.  Assuming 36-month depreciation, that $14K eight-core server works out to roughly $48 per core per month.  Doing the math on Amazon’s Reserved pricing (for Linux servers – Reserved pricing is not available for Windows) yields monthly per-core costs well below that figure.

McKinsey’s conclusions are simply wrong.  All of the instance types work out to the same pricing per month, varying only with the agreed-upon term of use (1 year or 3 years).  Importantly, assuming the 3-year depreciation schedule of their $14K server, the equivalent 3-year cost from AWS is $21/month/core.  This pricing does not include bandwidth costs, but neither does the $14K server purchase price I am comparing it to.
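
The back-of-envelope math here can be sketched in a few lines.  The dollar figures are the ones quoted above (the $14K server, 36-month depreciation, and the $21/month/core AWS figure); the per-core breakdown is my own arithmetic, not McKinsey’s:

```python
# Per-core monthly cost comparison, using the figures quoted in the post.

SERVER_COST = 14_000        # "typical" server: 2 CPUs, 4 cores each ($)
CORES = 2 * 4               # 8 cores total
DEPRECIATION_MONTHS = 36    # 3-year straight-line depreciation

owned_per_core_month = SERVER_COST / DEPRECIATION_MONTHS / CORES
print(f"Owned server: ${owned_per_core_month:.2f}/core/month")  # ~ $48.61

# AWS 3-year Reserved Instance equivalent quoted above (Linux)
AWS_PER_CORE_MONTH = 21
print(f"AWS 3-yr reserved: ${AWS_PER_CORE_MONTH}/core/month")
```

Even before factoring in power, facilities, and labor, the purchase price alone of the “typical” server is more than double the AWS per-core figure.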

Even more confusing is that on the two slides they draw separate EC2 pricing conclusions for small/medium companies and large companies, even though they use the same line of demarcation for what is economical – $45 per CPU per month.  The boys at RightScale also take exception to McKinsey’s reporting of the numbers.

Page 25 is where things get interesting.  McKinsey claims that there’s a 144% cost gap between running one’s own datacenter and completely outsourcing to AWS (an unreasonable premise, as wholesale outsourcing is not the message any cloud player delivers to any customer).  McKinsey then claims “the key factor is that the majority of servers that can be migrated are Windows servers.”  The implicit claim is that Windows makes AWS more costly.  A CIO takeaway may be “well, we have a ton of Windows boxes, so this won’t make sense.”  It’s true that AWS pre-made images running Windows are more expensive, especially if you include authentication services, but that applies only to Amazon’s pre-made images and doesn’t take into account customers who have their own volume licensing.

On this same slide, McKinsey attributes only a 10% labor savings to moving to a third-party provider.  They don’t substantiate that number, and it feels very light to me.  There is no mention of the automation that comes from moving to the cloud and using its tools for scale and elasticity – think tools like RightScale or Microsoft System Center.

McKinsey also knocks the uptime factor, claiming that enterprises set their own SLAs at 4 9s or higher.  In practice, actual uptime is lower for just about any enterprise, whatever its stated targets.  There are no web sources that track the downtime of enterprise resources, but there are a few that track the cloud providers.  McKinsey claims that since AWS SLAs can’t match those of enterprises, enterprises won’t be interested.  But there’s no punitive recourse if an IT manager misses an internal SLA, except perhaps that he might get fired, whereas AWS would be on the hook for real monetary damages, necessitating SLAs that are more realistic.  It’s easier to posture and claim you are designed for 4 9s than to sign an SLA for 3 9s with a cloud provider.  4 9s, the claimed enterprise target, allows only about 52 minutes of downtime per year.  One server reboot a month could put you over that number.
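
The “nines” arithmetic is worth spelling out, since it shows how thin the margin really is:

```python
# Downtime budget per year for a given availability target.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

for nines, availability in [(3, 0.999), (4, 0.9999)]:
    allowed = MINUTES_PER_YEAR * (1 - availability)
    print(f"{nines} nines: {allowed:.1f} minutes of downtime per year")
# 3 nines -> 525.6 minutes (~8.8 hours)
# 4 nines -> 52.6 minutes
```

Twelve monthly reboots at five minutes each would burn the entire 4-nines budget.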

On slides 29-30, McKinsey claims that large enterprises can increase their server utilization rates from 10% to 35% with “best in class, aggressive server virtualization.”  Additional cost controls can be gained, they claim, through adopting data center best practices, yielding TCO savings of 50%.
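
To see why utilization dominates this discussion, divide the report’s $45/month total cost of assets per CPU by the fraction of that CPU actually doing work (the 10% and 35% figures are McKinsey’s; the per-utilized-CPU framing is mine):

```python
# Cost of a *utilized* CPU-month at different utilization rates,
# using the report's $45/month total cost of assets per CPU.
TCA_PER_CPU_MONTH = 45.0

for utilization in (0.10, 0.35):
    cost_per_utilized_cpu = TCA_PER_CPU_MONTH / utilization
    print(f"{utilization:.0%} utilization -> "
          f"${cost_per_utilized_cpu:.0f} per utilized CPU-month")
# 10% -> $450; 35% -> ~$129
```

That gap is the virtualization savings McKinsey is advocating – but the same arithmetic is exactly what makes a highly utilized cloud provider cheaper still.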

Finally, they liken the hype around cloud to that of the dot com bubble, and ominously point out that the NASDAQ fell 80% when that one burst, suggesting that CIOs should avoid investing in the cloud hype.

What’s Missing from the Report?

  • The report lacks any mention of the massive economies of scale a large cloud provider gets when purchasing equipment.  Even things like the cost of power are glossed over: our own internal cost per kWh is much lower than the figure proposed for the “typical” datacenter.

  • At present, AWS has near-monopoly pricing power in the cloud, and it behooves them to keep those prices high.  With additional competition, prices will come down.

  • There is no mention of the speed to market gained in procuring and provisioning servers for new projects, nor of the risk mitigation for new projects.


More Stories By Brandon Watson

Brandon Watson is Director for Windows Phone 7. He specifically focuses on developers and the developer platform. He rejoined Microsoft in 2008 after nearly a decade on Wall Street and running successful start-ups. He has both an engineering degree and an economics degree from the University of Pennsylvania, as well as an MBA from The Wharton School of Business, and blogs at www.manyniches.com.