Nobody Ever Got Fired for Buying Hyperscale... Until They Did

November 19, 2018 | 5 min read

One of the oldest and most well-known sayings in IT is “Nobody ever got fired for buying IBM.” The meaning is simple: many IT leaders play it safe when selecting vendors or products, and by buying well-recognized brands they feel they are making a decision that won’t get them fired. Sadly, the same thinking is still prevalent when it comes to purchasing cloud computing services, even though it’s no longer valid. It’s not unheard of for an IT leader to select one of the big brands like Amazon Web Services (AWS) or Microsoft Azure for their cloud infrastructure without doing proper due diligence. And in many cases, that due diligence would reveal that a hyperscale cloud provider is far from ideal for enterprise applications.

Certainly, the allure of hyperscale is understandable. Many organizations have had success moving their application development efforts to AWS and Azure. The ability to easily spin workloads up and down is a perfect fit for development environments, and new software development tools that take advantage of multiple regions and proprietary, cloud-native features help developers deliver feature-rich applications that are resilient and highly available.

In these cases, I get it. Hyperscale cloud environments are perfect for maintaining and delivering cloud-native applications to end users. However, many organizations have applied the same logic to mission-critical production applications that are not cloud-native, without thorough cost analysis, performance testing, or administrator training. Without these key preparations, the results are unpredictable at best.

Mission-critical production applications that are not cloud-native depend on the underlying hardware for speed, resilience, and availability. In most cases, operating them on a hyperscale cloud requires provisioning double the resources to establish the same level of resilience that is inherently available in a VMware-based enterprise cloud.

Key Indicators of a Bad Fit

  • Slow performance – Hyperscale clouds typically offer multiple server configurations; think in terms of T-shirt sizes (e.g., small, medium, large, or extra-large). Based on application requirements, systems engineers select a server size from standardized specs. With non-cloud-native applications, it’s common for large or even extra-large hyperscale servers to perform worse than traditional in-house or hosted infrastructure.
  • High or unpredictable costs – Many hyperscale providers charge for outbound network traffic (data egress). Because existing applications were never written with network traffic costs in mind, organizations hosting non-cloud-native applications in the cloud often burn through their network budget much faster than predicted; the sketch after this list shows how quickly those charges can add up.
  • Significant skill gaps – Most organizations today run VMware hypervisors to manage multiple Windows and Linux virtual machines per server. Hyperscale clouds, however, run their own proprietary hypervisors, so systems administrators who move non-native applications there must set aside their established VMware skill sets and learn to manage those proprietary hypervisors on the fly. Although AWS now offers VMware as an option, its point of entry is high enough that small to mid-sized enterprises may find VMware on AWS out of reach.
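
To make the cost point concrete, here is a rough back-of-envelope sketch in Python of how quickly outbound traffic charges can accumulate. The per-GB rate and the daily traffic volume are hypothetical placeholders, not any provider’s published pricing; substitute your provider’s actual rate card and your application’s measured traffic profile.

    # Rough, illustrative estimate of monthly data egress cost for a
    # non-cloud-native application moved to a hyperscale cloud.
    # Both inputs are hypothetical assumptions for illustration only.
    EGRESS_RATE_PER_GB = 0.09   # assumed outbound transfer price, USD per GB
    DAILY_EGRESS_GB = 500       # assumed outbound traffic per day, in GB

    monthly_egress_gb = DAILY_EGRESS_GB * 30
    monthly_cost = monthly_egress_gb * EGRESS_RATE_PER_GB

    print(f"Estimated egress: {monthly_egress_gb:,} GB per month")
    print(f"Estimated cost:   ${monthly_cost:,.2f} per month")

At those assumed numbers, the application generates 15,000 GB of egress and roughly $1,350 in transfer charges every month, a line item that simply does not exist for most in-house or traditionally hosted infrastructure.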

It’s Simple. Choose the Right Cloud and Keep Your Job

So, with all of this risk attached to the big-name brands, how do you avoid getting fired when making cloud infrastructure decisions? Start by choosing the right cloud environment for the job. For mission-critical, VMware-based Windows and Linux applications that have been serving the organization effectively for years, the right platform is probably a VMware-based enterprise cloud. But not all VMware-based clouds are created equal; the devil is in the details. Consider these key factors:

  1. Speed – Is it fast enough? High performance comes only from a combination of sound architecture and a provider that does not over-subscribe cloud resources.
  2. Cost Model – How does the cost model work? Cost models vary significantly between providers.
  3. Data Protection – What data protection methods are available, and how automated are the recoveries?
  4. Skill Gaps – Is the environment familiar enough for your existing IT staff to manage with little or no additional training?
  5. Financial Independence – What is your cloud provider’s financial position? Does it allow them to put customers first, or are they more concerned with pleasing shareholders or private equity owners?

To choose the right cloud for the job, weigh all of these factors during the purchase process. Want to know what other companies consider in their evaluations? See how Prodigo Solutions selected the right cloud provider for their business needs.

Doug Theis is the Director of Market Strategy in Expedient’s Indianapolis market. He focuses on engaging with and improving the regional IT community by planning, sponsoring, and attending community events; facilitating IT-focused continuing education opportunities; and sharing strategies, tactics, and research that help IT professionals stay abreast of best practices and industry trends. Connect with Doug at doug.theis@expedient.com, and follow him on Twitter.
